
Latest publications in Biomedical Signal Processing and Control

Implementation of FBSE-EWT method in memristive crossbar array framework for automated glaucoma diagnosis from fundus images
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-15 DOI: 10.1016/j.bspc.2024.107087
Kumari Jyoti , Saurabh Yadav , Chandrabhan Patel , Mayank Dubey , Pradeep Kumar Chaudhary , Ram Bilas Pachori , Shaibal Mukherjee
Ocular disorders affect over 2.2 billion people globally, with glaucoma being a leading cause of blindness in India. Early detection of glaucoma is crucial, as increased fluid pressure gradually damages the optic nerve, leading to vision impairment. This study introduces an innovative approach for glaucoma detection and diagnosis, utilizing the two-dimensional Fourier-Bessel series expansion-based empirical wavelet transform (2D-FBSE-EWT) combined with a memristive crossbar array (MCA) model. The proposed method leverages deep learning and an ensemble EfficientNetb0-based technique to classify fundus images as either normal or glaucomatous. EfficientNetb0 outperforms other convolutional neural networks (CNNs) such as ResNet50, AlexNet, and GoogleNet, making it the optimal choice for glaucoma classification. Initially, the dataset was processed using the integrated MCA with 2D-FBSE-EWT model, and the reconstructed images were used for further classification. The assessment parameters of the reconstructed images demonstrated high quality, with a peak signal-to-noise ratio (PSNR) of 26.2346 dB and a structural similarity index (SSIM) of 95.38 %. The proposed method achieved an impressive accuracy of 94.15 % using EfficientNetb0. Additionally, it enhanced accuracy and sensitivity by 32.14 % and 40.93 %, respectively, compared to the unprocessed dataset.
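The PSNR figure quoted above reduces to a log-ratio of the peak intensity to the mean squared reconstruction error. A minimal NumPy sketch (not the paper's code; the synthetic image merely stands in for a fundus scan):

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a reconstructed image."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a synthetic image and a mildly noisy "reconstruction".
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
rec = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255)
print(round(psnr(ref, rec), 2))  # roughly 34 dB for sigma = 5 noise
```

A higher PSNR after MCA/2D-FBSE-EWT processing indicates a reconstruction closer to the original image.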
Citations: 0
Multi Path Heterogeneous Neural Networks: Novel comprehensive classification method of facial nerve function
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-15 DOI: 10.1016/j.bspc.2024.107152
Alan Spark , Jan Kohout , Ludmila Verešpejová , Martin Chovanec , Jan Mareš
This paper introduces a systematic classification of the facial nerve grading system through a comprehensive methodology built on a pioneering Multi-Path Heterogeneous Neural Network (MPHNN) designed for the accurate classification of facial exercises. It integrates four distinct Convolutional Neural Networks (CNNs) and Custom Feedforward Neural Networks (CFNNs) to enhance the precision of the classification. The CNNs are specifically tailored to scrutinize changes in the coordinates of facial landmarks over time, enabling the capture of both spatial information and temporal patterns in facial expressions during exercise. The CFNNs incorporate patient-specific variables and exercise statistics, including factors such as surgical history, the type of exercise, its duration, and synthetic features like cumulative movement for each landmark. By leveraging this comprehensive framework, the proposed method offers a nuanced representation of the patient's exercise performance, thereby facilitating more precise classification outcomes.
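The multi-path idea, one convolutional path over landmark trajectories and one feedforward path over patient metadata, fused before a classifier head, can be sketched as a plain NumPy forward pass. All shapes, kernel counts, and the four-class head below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

# Path 1 (CNN-like): 1-D convolution over a landmark trajectory (T frames x L landmarks),
# capturing temporal patterns, then global average pooling per kernel.
def temporal_path(traj, kernels):
    T, L = traj.shape
    feats = []
    for k in kernels:
        w = len(k)
        conv = np.array([[k @ traj[t:t + w, l] for l in range(L)]
                         for t in range(T - w + 1)])
        feats.append(relu(conv).mean())  # global average pool
    return np.array(feats)

# Path 2 (feedforward): dense layer over patient/exercise metadata.
def metadata_path(meta, W, b):
    return relu(W @ meta + b)

# Fuse both paths and classify with a softmax head.
def classify(traj, meta, kernels, W_meta, b_meta, W_out, b_out):
    fused = np.concatenate([temporal_path(traj, kernels),
                            metadata_path(meta, W_meta, b_meta)])
    logits = W_out @ fused + b_out
    e = np.exp(logits - logits.max())
    return e / e.sum()

traj = rng.normal(size=(30, 5))   # 30 frames, 5 facial landmarks (toy data)
meta = rng.normal(size=4)         # e.g. surgery history, exercise type/duration, cumulative movement
kernels = [rng.normal(size=3) for _ in range(6)]
W_meta, b_meta = rng.normal(size=(6, 4)), np.zeros(6)
W_out, b_out = rng.normal(size=(4, 12)), np.zeros(4)  # 4 hypothetical grading classes
probs = classify(traj, meta, kernels, W_meta, b_meta, W_out, b_out)
print(probs)
```

The design point is that heterogeneous inputs (time series vs. tabular) get path-specific encoders and only meet at the fusion layer.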
Citations: 0
SwinSAM: Fine-grained polyp segmentation in colonoscopy images via segment anything model integrated with a Swin Transformer decoder
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-15 DOI: 10.1016/j.bspc.2024.107055
Zhoushan Feng , Yuliang Zhang , Yanhong Chen , Yiyu Shi , Yu Liu , Wen Sun , Lili Du , Dunjin Chen
Polyp segmentation in colonoscopy imagery is a critical procedure in the early detection and preemptive management of colorectal cancer. To facilitate diagnosis, it is pivotal to attain segmentation with high precision, emphasizing fine-grained details which can potentially harbor crucial information regarding the disease state. To address the prevailing demand for more refined segmentation techniques, this study introduces an innovative framework, "SwinSAM", which ingeniously integrates a Swin Transformer decoder with a SAM encoder. The SAM model has seen over a billion images and possesses a strong capability for image comprehension. However, its training data primarily originates from natural images rather than medical ones. Hence, we designed an adapter module to infuse specific medical domain information into SAM. Furthermore, because polyps vary in size and shape and blend heavily with the background, the simplistic convolutional decoder in the original SAM model struggles to accurately segment their intricate details. This prompted us to utilize the Swin Transformer as the decoder. Additionally, considering the significant shape variations of polyps, we employed a multi-scale perception fusion module to process the deep features extracted by SAM. By using convolutions with different receptive fields, we can extract information about polyps of various shapes. Finally, we optimized the network parameters through multi-level supervision. Comprehensive experiments were conducted on five commonly used polyp segmentation datasets. The results validate that our proposed method achieves good performance across datasets with different polyp backgrounds.
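Adapter modules of the kind described, small residual bottlenecks inserted into a frozen encoder to inject domain-specific information, follow a common pattern: down-project, non-linearity, up-project, skip connection. A generic NumPy sketch (dimensions and the near-identity initialization are illustrative choices, not SwinSAM's specifics):

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    return np.maximum(x, 0.0)

class BottleneckAdapter:
    """Residual bottleneck adapter: down-project, non-linearity, up-project, skip.

    Inserting such modules into a frozen encoder is a common way to inject
    domain-specific (here, medical) information without retraining the backbone.
    """
    def __init__(self, dim, bottleneck):
        # Near-zero init so the adapter starts as (almost) an identity mapping.
        self.W_down = rng.normal(scale=0.01, size=(bottleneck, dim))
        self.W_up = rng.normal(scale=0.01, size=(dim, bottleneck))

    def __call__(self, tokens):
        # tokens: (N, dim) encoder features; output keeps the same shape.
        return tokens + relu(tokens @ self.W_down.T) @ self.W_up.T

tokens = rng.normal(size=(16, 256))  # 16 patch tokens of width 256 (toy values)
adapter = BottleneckAdapter(dim=256, bottleneck=32)
out = adapter(tokens)
print(out.shape)
```

Because only the small projection matrices are trained, the billion-image knowledge in the frozen SAM encoder is preserved while the adapter learns the medical-domain shift.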
Citations: 0
A novel method for hands rehabilitation using optimal control of fractional order singular system and biological signals
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-15 DOI: 10.1016/j.bspc.2024.107057
Vahid Safari Dehnavi, Masoud Shafiee
In recent years, significant advances have been made in biological signal processing, allowing for the control of robotic devices. This paper introduces an innovative hand rehabilitation method for improving brain-hand connectivity using a robotic hand based on cognitive robotics. The process begins by recording the user's electroencephalogram (EEG) and electromyogram (EMG) signals while performing hand movements in two different positions. Next, a method for effective EEG and EMG channel selection is developed, followed by two algorithms for classification of various hand movement patterns. The first algorithm incorporates preprocessing, window selection, feature extraction, and machine learning algorithms. The second algorithm uses automatic feature extraction via an optimized CNN-LSTM-SVM. The rehabilitation process is controlled using fractional order singular optimal control based on the identified hand movement patterns and optimal controller design. This control approach applies to both time-invariant and time-varying systems. A mathematical model of the constrained rehabilitation process using a robotic hand is derived using fractional order singular theory. The problem of fractional order singular optimal control is solved via a numerical-analytical approach that utilizes Hamiltonian and orthogonal polynomials. A master controller supervises the entire process, and adjustments are made to each component if the error exceeds a desired threshold. Finally, a simulation is conducted to demonstrate the effectiveness of the proposed method. Conclusions regarding the feasibility and potential advantages of utilizing cognitive robotics-based control for robotic hand rehabilitation are drawn.
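For orientation, a standard (textbook) fractional-order singular linear-quadratic setup matching the abstract's ingredients, a singular descriptor matrix, a fractional state derivative, a quadratic cost, and a Hamiltonian stationarity condition, looks as follows; these are generic forms, not necessarily the paper's exact equations:

```latex
% Generic fractional-order singular LQR sketch (Caputo derivative, 0 < \alpha < 1)
E\,{}^{C}\!D_{t}^{\alpha}x(t) = A\,x(t) + B\,u(t), \qquad \det E = 0,
\quad J = \tfrac{1}{2}\int_{0}^{t_f}\!\left( x^{\top}Q\,x + u^{\top}R\,u \right)\mathrm{d}t,
\quad H = \tfrac{1}{2}\left(x^{\top}Qx + u^{\top}Ru\right) + \lambda^{\top}\!\left(Ax + Bu\right),
\quad \frac{\partial H}{\partial u} = 0 \;\Rightarrow\; u^{*} = -R^{-1}B^{\top}\lambda .
```

The singular matrix E encodes the algebraic constraints of the rehabilitation robot, while the fractional order alpha gives the extra modeling freedom the abstract refers to.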
Citations: 0
Enhancing spatial auditory attention decoding with wavelet-based prototype training
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-14 DOI: 10.1016/j.bspc.2024.107130
Zelin Qiu , Jianjun Gu , Dingding Yao , Junfeng Li , Yonghong Yan
The spatial auditory attention decoding (Sp-AAD) technology aims to determine the direction of auditory attention in multi-talker scenarios via neural recordings. Despite the success of recent Sp-AAD algorithms, their performance is hindered by trial-specific features in EEG data. This study aims to improve decoding performance against these features. Studies in neuroscience indicate that spatial auditory attention can be reflected in the topological distribution of EEG energy across different frequency bands. This insight motivates us to propose Prototype Training, a wavelet-based training method for Sp-AAD. This method constructs prototypes with enhanced energy distribution representations and reduced trial-specific characteristics, enabling the model to better capture auditory attention features. To implement prototype training, an EEGWaveNet that employs the wavelet transform of EEG is further proposed. Detailed experiments indicate that the EEGWaveNet with prototype training outperforms other competitive models on various datasets, and the effectiveness of the proposed method is also validated. As a training method independent of model architecture, prototype training offers new insights into the field of Sp-AAD. The source code is available online at: https://github.com/qiuzelinChina/PrototypeTraining.
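The core intuition, represent each EEG trial by its per-channel wavelet-band energy distribution and average trials of a class into a prototype that suppresses trial-specific idiosyncrasies, can be illustrated with a one-level Haar decomposition. This is a simplified stand-in for the paper's wavelet transform, with toy channel/sample counts:

```python
import numpy as np

def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    x = signal[: len(signal) // 2 * 2].reshape(-1, 2)
    return (x[:, 0] + x[:, 1]) / np.sqrt(2), (x[:, 0] - x[:, 1]) / np.sqrt(2)

def band_energies(trial):
    """Per-channel energies of the low/high Haar bands for one EEG trial (C x T)."""
    feats = []
    for ch in trial:
        approx, detail = haar_dwt(ch)
        feats.append([np.sum(approx ** 2), np.sum(detail ** 2)])
    return np.asarray(feats)  # (C, 2) energy topography

def class_prototype(trials):
    """Average band-energy maps over trials of one attention class,
    smoothing out trial-specific characteristics."""
    return np.mean([band_energies(t) for t in trials], axis=0)

rng = np.random.default_rng(3)
trials = [rng.normal(size=(8, 128)) for _ in range(20)]  # 20 trials, 8 channels, 128 samples
proto = class_prototype(trials)
print(proto.shape)
```

Because the Haar transform is orthonormal, the two band energies of each channel sum to the raw signal energy, so the prototype really is a redistribution of EEG energy across bands, as the neuroscience motivation suggests.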
Citations: 0
An explainable fast deep neural network for emotion recognition
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-14 DOI: 10.1016/j.bspc.2024.107177
Francesco Di Luzio, Antonello Rosato, Massimo Panella
In artificial intelligence, explainability mirrors the human practice of reasoning toward a decision: it is a model's ability to give a clear, interpretable account of how it arrived at a particular outcome. This study explores explainability techniques for binary deep neural architectures in the framework of emotion classification through video analysis. We investigate the optimization of input features to binary classifiers for emotion recognition, with face landmark detection, using an improved version of the Integrated Gradients explainability method. The main contribution of this paper is the employment of an innovative explainable artificial intelligence algorithm to understand the crucial facial landmark movements typical of emotional feeling, and the use of this information to improve the performance of deep learning-based emotion classifiers. By means of explainability, we can optimize the number and position of the facial landmarks used as input features for facial emotion recognition, lowering the impact of noisy landmarks and thus increasing the accuracy of the developed models. To test the effectiveness of the proposed approach, we considered a set of deep binary models for emotion classification, trained initially with a complete set of facial landmarks, which is then progressively reduced according to a suitable optimization procedure. The obtained results prove the robustness of the proposed explainable approach in terms of understanding the relevance of the different facial points for the different emotions, improving the classification accuracy and diminishing the computational cost.
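Standard Integrated Gradients, which the paper builds on, attributes a model's output to each input feature by integrating the gradient along a straight path from a baseline to the input. A self-contained sketch on a toy differentiable "model" (the quadratic score and its weights are illustrative, not the paper's network); the completeness axiom, attributions summing to f(x) - f(baseline), serves as a built-in sanity check:

```python
import numpy as np

def integrated_gradients(f, grad_f, x, baseline, steps=200):
    """Riemann-sum (midpoint) approximation of Integrated Gradients:
    IG_i = (x_i - x0_i) * integral_0^1 df/dx_i (x0 + a*(x - x0)) da."""
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean([grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

# Toy differentiable "model": a quadratic score over landmark displacements.
w = np.array([3.0, -1.0, 0.5])
f = lambda x: np.sum(w * x ** 2)
grad_f = lambda x: 2 * w * x

x = np.array([1.0, 2.0, -1.0])
baseline = np.zeros(3)
attributions = integrated_gradients(f, grad_f, x, baseline)
print(attributions)  # analytically w_i * x_i**2 = [3, -4, 0.5]

# Completeness axiom: attributions sum to f(x) - f(baseline).
print(np.isclose(attributions.sum(), f(x) - f(baseline)))
```

Ranking landmarks by such attributions is what allows the least relevant (or noisiest) ones to be pruned from the classifier's input.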
Citations: 0
Pseudo-label guided selective mutual learning for semi-supervised 3D medical image segmentation
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2024-11-14 DOI: 10.1016/j.bspc.2024.107144
Wenlong Hang , Peng Dai , Chengao Pan , Shuang Liang , Qingfeng Zhang , Qiang Wu , Yukun Jin , Qiong Wang , Jing Qin
Semi-supervised learning (SSL) has shown promising results in 3D medical image segmentation by utilizing both labeled and readily available unlabeled images. Most current SSL methods predict unlabeled data under different perturbations by employing subnetworks with the same architecture. Despite their progress, the homogenization of subnetworks limits the diversity of predictions on both labeled and unlabeled data, making it difficult for the subnetworks to correct each other and giving rise to a confirmation bias issue. In this paper, we introduce an SSL framework termed pseudo-label guided selective mutual learning (PLSML), which incorporates two distinct subnetworks and selectively utilizes their derived pseudo-labels for mutual supervision to mitigate this issue. Specifically, the discrepancies between the pseudo-labels of the two distinct subnetworks are used to select the regions within labeled images that are prone to missegmentation. We then introduce a mutual discrepancy correction (MDC) regularization to revisit these regions. Moreover, a selective mutual pseudo supervision (SMPS) regularization is introduced to estimate the reliability of the pseudo-labels of unlabeled images and selectively leverage the more reliable pseudo-labels from one subnetwork to supervise the other. The integration of MDC and SMPS regularizations facilitates inter-subnetwork mutual correction, consequently mitigating confirmation bias. Extensive experiments on two 3D medical image datasets demonstrate the superiority of our PLSML as compared to state-of-the-art SSL methods. The source code is available online at https://github.com/1pca0/PLSML.
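The selective-mutual-supervision idea can be sketched with a simple reliability rule: keep only positions where the two subnetworks agree and at least one is confident, and let the more confident network provide the pseudo-label. The agreement-plus-threshold rule below is an illustrative proxy for PLSML's reliability estimate, not its exact criterion:

```python
import numpy as np

def select_pseudo_labels(probs_a, probs_b, tau=0.8):
    """Keep positions where the two subnetworks agree and the more confident
    one exceeds a threshold; the confident network's prediction becomes the
    pseudo-label used to supervise the other (a toy reliability rule)."""
    pred_a, pred_b = probs_a.argmax(-1), probs_b.argmax(-1)
    conf_a, conf_b = probs_a.max(-1), probs_b.max(-1)
    mask = (pred_a == pred_b) & (np.maximum(conf_a, conf_b) > tau)
    pseudo = np.where(conf_a >= conf_b, pred_a, pred_b)
    return pseudo, mask

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

rng = np.random.default_rng(4)
logits_a = rng.normal(size=(10, 3))                        # subnetwork A, 10 positions, 3 classes
logits_b = logits_a + rng.normal(scale=0.5, size=(10, 3))  # subnetwork B as a perturbed view
pa, pb = softmax(logits_a), softmax(logits_b)
pseudo, mask = select_pseudo_labels(pa, pb, tau=0.5)
print(mask.sum(), "of", mask.size, "positions kept for mutual supervision")
```

Masking out disagreed or low-confidence positions is what keeps unreliable pseudo-labels from reinforcing each subnetwork's mistakes.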
Citations: 0
Bayesian optimization enhanced FKNN model for Parkinson’s diagnosis
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-11-14 DOI: 10.1016/j.bspc.2024.107142
Mohamed Elkharadly , Khaled Amin , O.M. Abo-Seida , Mina Ibrahim
Parkinson’s disease (PD) is a progressive neurodegenerative condition that adversely impacts motor skills, speech, and cognitive abilities. Research has revealed that verbal impediments manifest in the early stages of PD, making them a potential diagnostic marker. This study introduces an innovative approach, leveraging Bayesian Optimization (BO) to optimize a fuzzy k-nearest neighbor (FKNN) model, enhancing the detection of PD. BO-FKNN was validated on a speech dataset. To comprehensively evaluate the efficacy of the proposed model, BO-FKNN was compared against five commonly used parameter optimization methods: FKNN based on Particle Swarm Optimization, on a Genetic Algorithm, on the Bat Algorithm, on the Artificial Bee Colony algorithm, and on Grid Search. Moreover, to further boost diagnostic accuracy, a hybrid feature selection method based on the Pearson Correlation Coefficient (PCC) and Information Gain (IG) was applied before the BO-FKNN method, yielding the proposed PCCIG-BO-FKNN. The experimental outcomes highlight the superior performance of the proposed system, with an impressive classification accuracy of 98.47%.
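For context, the FKNN decision rule that BO tunes (the neighbour count k and fuzzifier m are its usual hyperparameters) weights each neighbour's class vote by inverse distance raised to 2/(m−1). A minimal sketch, assuming Euclidean distance and crisp training labels — the function name, defaults, and example data are illustrative, not the paper's configuration:

```python
import numpy as np

def fknn_predict(X_train, y_train, x, k=3, m=2.0):
    """Fuzzy k-NN: the k nearest neighbours vote for their classes with
    weights proportional to inverse distance raised to 2/(m-1); class
    memberships are normalized to sum to one."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]                         # indices of k nearest points
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))
    classes = np.unique(y_train)
    memberships = np.array([w[y_train[nn] == c].sum() for c in classes])
    memberships = memberships / memberships.sum()
    return classes[np.argmax(memberships)], memberships

X_train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                    [5.0, 5.0], [5.0, 6.0], [6.0, 5.0]])
y_train = np.array([0, 0, 0, 1, 1, 1])
label, memberships = fknn_predict(X_train, y_train, np.array([0.2, 0.2]))
print(label, memberships)   # class 0, full membership
```

BO would then search over (k, m) to maximize cross-validated accuracy of this classifier on the speech features.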
{"title":"Bayesian optimization enhanced FKNN model for Parkinson’s diagnosis","authors":"Mohamed Elkharadly ,&nbsp;Khaled Amin ,&nbsp;O.M. Abo-Seida ,&nbsp;Mina Ibrahim","doi":"10.1016/j.bspc.2024.107142","DOIUrl":"10.1016/j.bspc.2024.107142","url":null,"abstract":"<div><div>A progressive neurodegenerative condition that adversely impacts motor skills, speech, and cognitive abilities is Parkinson’s disease (PD). Research has revealed that verbal impediments manifest in the early of PD, making them a potential diagnostic marker. This study introduces an innovative approach, leveraging Bayesian Optimization (BO) to optimize a fuzzy k-nearest neighbor (FKNN) model, enhancing the detection of PD. BO-FKNN was validated on a speech datasets. To comprehensively evaluate the efficacy of the proposed model, BO-FKNN was compared against five commonly used parameter optimization methods, including FKNN based on Particle Swarm Optimization, FKNN based on Genetic algorithm, FKNN based on Bat algorithm, FKNN based on Artificial Bee Colony algorithm, and FKNN based on Grid search. Moreover, to further boost the diagnostic accuracy, a hybrid feature selection method based on Pearson Correlation Coefficient (PCC) and Information Gain (IG) was employed prior to the BO-FKNN method, consequently the PCCIG-BO-FKNN was proposed. 
The experimental outcomes highlight the superior performance of the proposed system, boasting an impressive classification accuracy of 98.47%.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"100 ","pages":"Article 107142"},"PeriodicalIF":4.9,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142652844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimized attention-based lightweight CNN using particle swarm optimization for brain tumor classification
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-11-14 DOI: 10.1016/j.bspc.2024.107126
Okan Guder, Yasemin Cetin-Kaya
Timely detection of brain tumors is crucial for developing effective treatment strategies and improving the overall well-being of patients. In this work we introduce an innovative approach for classifying and diagnosing brain tumors with the help of magnetic resonance imaging and a deep learning model. In the proposed method, various attention mechanisms that allow the model to assign different degrees of importance to certain inputs are used, and their performances are compared. Additionally, the Particle Swarm Optimization algorithm is employed to find the optimal hyperparameter values for the Convolutional Neural Network model that incorporates attention mechanisms. A four-class public dataset from the Kaggle website was used to evaluate the effectiveness of the proposed method. A maximum accuracy of 99%, precision of 99.02%, recall of 99%, and F1 score of 99.01% were obtained on the Kaggle test dataset. In addition, to assess the model’s adaptability and robustness, salt-and-pepper noise was introduced to the same test dataset at various rates, and the model’s performance was re-evaluated. A maximum accuracy of 97.78% was obtained on the test dataset with 1% noise, 95.04% with 2% noise, and 88.10% with 3% noise. Analysis of these results shows that the proposed model can be successfully used in brain tumor classification and can assist doctors in making diagnostic decisions.
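The Particle Swarm Optimization step can be illustrated with a minimal swarm minimizing a toy quadratic standing in for validation loss over two hyperparameters. The coefficients w, c1, c2 below are common textbook defaults; nothing here reflects the paper's actual search space or CNN training loop:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best, the swarm tracks a global best, and velocities blend inertia,
    a cognitive pull toward the personal best, and a social pull toward
    the global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles in bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# toy stand-in for validation loss over two hyperparameters; minimum at (1, -2)
best, best_f = pso_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                            (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print(best, best_f)   # best close to (1.0, -2.0)
```

In the paper's setting, `f` would instead train and validate the attention-based CNN for a candidate hyperparameter vector, which is why PSO's small number of function evaluations matters.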
{"title":"Optimized attention-based lightweight CNN using particle swarm optimization for brain tumor classification","authors":"Okan Guder,&nbsp;Yasemin Cetin-Kaya","doi":"10.1016/j.bspc.2024.107126","DOIUrl":"10.1016/j.bspc.2024.107126","url":null,"abstract":"<div><div>Timely detection of brain tumors is crucial for developing effective treatment strategies and improving the overall well-being of patients. We introduced an innovative approach in this work for classifying and diagnosing brain tumors with the help of magnetic resonance imaging and a deep learning model. In the proposed method, various attention mechanisms that allow the model to assign different degrees of importance to certain inputs are used, and their performances are compared. Additionally, the Particle Swarm Optimization algorithm is employed to find the optimal hyperparameter values for the Convolutional Neural Network model that incorporates attention mechanisms. A four-class public dataset from the Kaggle website was used to evaluate the effectiveness of the proposed method. A maximum accuracy of 99%, precision of 99.02%, recall of 99%, and F1 score of 99.01% were obtained on the Kaggle test dataset. In addition, to assess the model’s adaptability and robustness, salt-and-pepper noise was introduced to the same test dataset at various rates, and the models’ performance was re-evaluated. A maximum accuracy of 97.78% was obtained on the test data set with 1% noise, 95.04% on the test data set with 2% noise, and 88.10% on the test data set with 3% noise. 
When the results obtained are analyzed, it is concluded that the proposed model can be successfully used in brain tumor classification and can assist doctors in making diagnostic decisions.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"100 ","pages":"Article 107126"},"PeriodicalIF":4.9,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142658268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-perspective feature compensation enhanced network for medical image segmentation
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2024-11-13 DOI: 10.1016/j.bspc.2024.107099
Chengzhang Zhu , Renmao Zhang , Yalong Xiao , Beiji Zou , Zhangzheng Yang , Jianfeng Li , Xinze Li
The accuracy of medical image segmentation is crucial for clinical analysis and diagnosis. Despite progress with U-Net-inspired models, they often underuse the multi-scale convolutional layers crucial for capturing detailed visual features and overlook the importance of merging multi-scale features within the channel dimension to enrich the decoder. To address these limitations, we introduce a Multi-perspective Feature Compensation Enhanced Network (MFCNet) for medical image segmentation. Our network design is characterized by the strategic employment of dual-scale convolutional kernels at each encoder level, enabling precise capture of both granular and broader context features throughout the encoding phase. We further enhance the model by integrating a Dual-scale Channel-wise Cross-fusion Transformer (DCCT) mechanism within the skip connections, which effectively integrates the dual-scale features. We then apply a spatial attention (SA) mechanism to amplify the salient areas within the dual-scale features; the enhanced features are merged with the feature map at the same level of the decoder, augmenting the overall feature representation. Our proposed MFCNet has been evaluated on three distinct medical image datasets, and the experimental results demonstrate that it achieves more accurate segmentation and adapts to varying segmentation targets, making it more competitive than existing methods. The code is available at: https://github.com/zrm-code/MFCNet.
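The spatial attention (SA) step can be sketched as follows: channel-wise average and max pooling over the feature map, a fixed linear mix standing in for the learned convolution, and a sigmoid gate that rescales every channel. The pooling weights here are purely illustrative; the actual SA module is learned end-to-end:

```python
import numpy as np

def spatial_attention(feat, w_avg=0.5, w_max=0.5):
    """Simplified spatial attention over a (C, H, W) feature map: pool
    across channels (average and max), mix with fixed weights in place of
    the learned conv layer, squash with a sigmoid, and gate all channels
    with the resulting (H, W) attention map."""
    avg_map = feat.mean(axis=0)                    # (H, W) average over channels
    max_map = feat.max(axis=0)                     # (H, W) max over channels
    att = 1.0 / (1.0 + np.exp(-(w_avg * avg_map + w_max * max_map)))
    return feat * att[None, :, :], att

feat = np.random.default_rng(0).normal(size=(8, 4, 4))   # toy feature map
gated, att = spatial_attention(feat)
print(gated.shape, att.shape)   # (8, 4, 4) (4, 4)
```

The gated features would then be concatenated or summed with the same-level decoder feature map, as the abstract describes.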
{"title":"Multi-perspective feature compensation enhanced network for medical image segmentation","authors":"Chengzhang Zhu ,&nbsp;Renmao Zhang ,&nbsp;Yalong Xiao ,&nbsp;Beiji Zou ,&nbsp;Zhangzheng Yang ,&nbsp;Jianfeng Li ,&nbsp;Xinze Li","doi":"10.1016/j.bspc.2024.107099","DOIUrl":"10.1016/j.bspc.2024.107099","url":null,"abstract":"<div><div>Medical image segmentation’s accuracy is crucial for clinical analysis and diagnosis. Despite progress with U-Net-inspired models, they often underuse multi-scale convolutional layers crucial for enhancing detailing visual features and overlooking the importance of merging multi-scale features within the channel dimension to enhance decoder complexity. To address these limitations, we introduce a Multi-perspective Feature Compensation Enhanced Network (MFCNet) for medical image segmentation. Our network design is characterized by the strategic employment of dual-scale convolutional kernels at each encoder level. This synergy enables the precise capture of both granular and broader context features throughout the encoding phase. We further enhance the model by integrating a Dual-scale Channel-wise Cross-fusion Transformer (DCCT) mechanism within the skip connections. This innovation effectively integrates dual-scale features. We subsequently implemented the spatial attention (SA) mechanism to amplify the saliency areas within the dual-scale features. These enhanced features were subsequently merged with the feature map of the same level in the decoder, thereby augmenting the overall feature representation. Our proposed MFCNet has been evaluated on three distinct medical image datasets, and the experimental results demonstrate that it achieves more accurate segmentation performance and adaptability to varying target segmentation, making it more competitive compared to existing methods. 
The code is available at: <span><span>https://github.com/zrm-code/MFCNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"100 ","pages":"Article 107099"},"PeriodicalIF":4.9,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142657661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0