
Biomedical Signal Processing and Control: Latest Publications

Automated measurement of aortic parameters using deep learning and computer vision
IF 4.9 | CAS Medicine Tier 2 | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-01-29 | DOI: 10.1016/j.bspc.2026.109673
Ivan Blekanov , Gleb Kim , Fedor Ezhov , Evgenii Larin , Lev Kovalenko , Anthony Nwohiri , Egor Razumilov
Advancements in artificial intelligence are rapidly transforming healthcare, including the diagnosis of aortic aneurysms, which relies on precise measurement of aortic parameters from CT scans. Current manual methods are time-consuming and require expert surgeons, making automation essential. Accurate automation depends on robust aortic semantic segmentation, cross-section reconstruction, and parameter extraction. Existing 2D segmentation models achieve Dice similarity coefficients (DSC) of 0.842–0.890, while 3D models reach 0.750–0.950. Despite the generally high segmentation accuracy, 3D models require substantial computational resources for both training and inference. This presents a substantial challenge for clinical deployment, especially in developing countries. Our research bridges this gap by advancing state-of-the-art 2D deep learning techniques for aortic semantic segmentation on CT scans. In this regard, we developed a pipeline leveraging novel neural network (NN) architectures and computer vision (CV) techniques. Various high-performing semantic segmentation NNs were rigorously compared. The best NNs (such as VAN-S-UNet, rViT-UNet (TransUNet), MiT-B2-UNet) achieved a DSC of 0.938–0.976 for open datasets, and 0.912 for our dataset of 50 aortic CT scans. The proposed pipeline automates the main stages of CT image processing, from raw CT scan data to quantitative aortic assessment, extracting clinically relevant parameters such as cross-sectional area, border length, and major and minor diameters for subsequent pathology diagnosis and informed clinical decision-making. Case study experiments show minor deviations between the results of the proposed method and expert assessments: approximately 5% for perimeter, 6% for major diameter, 10% for minor diameter, and 15% for cross-sectional area measurement.
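The Dice similarity coefficient (DSC) quoted throughout these results is the standard overlap metric between a predicted and a reference segmentation mask. A minimal NumPy sketch of the metric (generic, not the authors' pipeline):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two overlapping 4x4 masks: 4 predicted pixels, 6 target pixels, 4 shared
a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:3, 1:4] = 1
print(round(dice_coefficient(a, b), 3))  # 2*4/(4+6) = 0.8
```

A DSC of 1.0 means perfect overlap; the reported 0.938–0.976 range indicates near-complete agreement with expert annotations.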
Citations: 0
SDFusion: Fractional-order structural tensor-guided dynamic frequency enhancement network for medical image fusion
IF 4.9 | CAS Medicine Tier 2 | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-01-27 | DOI: 10.1016/j.bspc.2026.109682
Qinke Yu, Yuanjun Wang
Medical image fusion (MIF) seeks to complement and enhance functional and structural information. However, challenges remain in fully preserving modality-specific features while effectively integrating shared features. Furthermore, static convolutional kernels lack adaptability to input frequency distributions, limiting the extraction of key features. To address these challenges, this study proposes a fractional-order structural tensor-guided dynamic frequency enhancement network (SDFusion) for MIF. Specifically, the network is built on a triple-branch architecture comprising an anatomical branch, a functional branch, and a shared branch. The first two capture modality-specific features, and the latter explores common features across modalities, all operating in a synergistic manner for efficient interactive feature extraction. To enhance feature representation, we design dynamic frequency convolutions to adaptively capture key information across different frequency bands. Subsequently, the introduction of a cross attention and a spatially guided dual-domain channel attention fusion mechanism enhances feature interaction, ensuring the preservation of anatomical edges and functional hotspots. Moreover, we design an adaptive loss function weight allocation strategy based on fractional-order structural tensors. This strategy maximizes the retention of structural details and functional information, thereby improving the fidelity of the fused images. Extensive experiments on the Harvard Medical datasets show that SDFusion surpasses existing state-of-the-art methods in terms of visualization and objective evaluation.
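The loss-weighting strategy builds on fractional-order structural tensors. The classical integer-order structure tensor underlying that idea measures how strongly oriented the local gradient field is; a simplified, unsmoothed NumPy sketch (the fractional-order variant used in the paper differs):

```python
import numpy as np

def structure_tensor_coherence(img: np.ndarray) -> np.ndarray:
    """Per-pixel coherence from a classical (integer-order) structure tensor."""
    gy, gx = np.gradient(img.astype(float))   # row- and column-direction gradients
    jxx, jxy, jyy = gx * gx, gx * gy, gy * gy  # tensor components [[jxx, jxy], [jxy, jyy]]
    tr = jxx + jyy
    disc = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2  # eigenvalues, l1 >= l2
    # coherence: 1 = strongly oriented structure (edge), 0 = flat/isotropic region
    return (l1 - l2) / (l1 + l2 + 1e-12)

# A vertical step edge is perfectly oriented, so coherence there approaches 1
img = np.zeros((8, 8)); img[:, 4:] = 1.0
print(structure_tensor_coherence(img)[4, 4])
```

High-coherence regions correspond to the anatomical edges whose preservation the adaptive loss weighting rewards.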
Citations: 0
Corrigendum to “A novel imbalanced dataset mitigation method and ECG classification model based on combined 1D_CBAM-autoencoder and lightweight CNN model” [Biomed. Sig. Process. Control 87 (2024) 105437]
IF 4.9 | CAS Medicine Tier 2 | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-02-03 | DOI: 10.1016/j.bspc.2026.109625
Zhikang Chen , Danni Yang , Tianrui Cui , Ding Li , Houfang Liu , Yi Yang , Sheng Zhang , Sifan Yang , Tian-Ling Ren
Citations: 0
Corrigendum to “Temporal and topographic effects of longer auditory stimuli on slow oscillations during slow wave sleep” [Biomed. Sig. Process. Control 112(Part D) (2026) 108649]
IF 4.9 | CAS Medicine Tier 2 | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-02-03 | DOI: 10.1016/j.bspc.2026.109701
Marek Piorecký , Filip Černý , Václava Piorecká , Daniela Dudysová , Jana Kopřivová
Citations: 0
The effect of acute stress on the interpretability and generalization of schizophrenia predictive machine learning models
IF 4.9 | CAS Medicine Tier 2 | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-02-03 | DOI: 10.1016/j.bspc.2026.109708
Gideon Vos , Maryam Ebrahimpour , Liza van Eijk , Zoltan Sarnyai , Mostafa Rahimi Azghadi
Schizophrenia is a severe mental disorder, and early diagnosis is essential for improving patient outcomes. While ongoing research continues to advance understanding, the disorder’s complexity still limits accurate prediction. Combining electroencephalography (EEG) with machine learning (ML) has shown promise in aiding diagnosis, but overlapping mental health conditions such as stress can reduce the interpretability and reliability of ML models. This study used ML models trained on open EEG datasets to predict schizophrenia and to investigate how acute stress response during EEG recording affects model performance. Experiments drew on three open EEG datasets: acute stress, schizophrenia recorded at rest, and schizophrenia recorded during tasks, with healthy control subjects included in both schizophrenia datasets. Four XGBoost-based classification models were developed: (1) acute stress, (2) schizophrenia at rest, (3) schizophrenia during tasks, and (4) a 3-class model combining healthy controls with both schizophrenia groups. Explainable AI techniques were applied to further evaluate model performance against known schizophrenia brain-region and frequency-domain markers. A novel EEG artifact adjustment method for stress compensation was applied, and model performance was re-evaluated. Results showed that acute stress response significantly affected EEG recordings and ML model accuracy, and compensating for acute stress improved model generalization. Findings underscore the need for rigorous health screening, artifact processing and stress response management during EEG recordings to ensure high-quality data for ML models.
Citations: 0
Analysis of bio-plausible spiking neural networks for motor imagery recognition tasks
IF 4.9 | CAS Medicine Tier 2 | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-02-04 | DOI: 10.1016/j.bspc.2026.109500
Xiuqing Wang , Yunpeng Yang , Qingru Li , Xiaoya Ye , Yang An , Qiuting Li
Brain-computer interfaces (BCIs) are a critical aspect of human–computer interaction (HCI). Motor imagery (MI) EEG-based BCIs have great application potential for assisting disabled people in motor function reconstruction and restoration, and successfully performing tasks driven by MI-EEG signals depends on decoding those signals accurately. Although deep learning (DL) can be used to analyze EEG signals, it still lacks interpretability. Compared with traditional artificial neural networks (ANNs), spiking neural networks (SNNs) use spiking neurons, a bio-plausible model of how neurons in the brain communicate, and analyze EEG signals through spike trains with better bio-interpretability. Aiming to provide an effective and stable model for analyzing EEG-based MI information, we propose a deep spiking convolutional neural network with a self-attention mechanism (DSCNN-SA) for EEG-based MI recognition. The model first extracts high-level spatio-temporal features from the EEG signal, transforms them into spike trains that serve as input to the deep spiking convolutional neural network (DSCNN), and classifies MI according to the firing patterns of the spiking neurons. Evaluated on 2-class motor imagery tasks from the BCI Competition IV-2a and IV-2b datasets, the DSCNN-SA model achieves average accuracies of 85.53% and 81.52% respectively, outperforming comparison models such as KNN, MLP, DCNN-SA, ECCSP-TB2B, CNN-SAE, EEGNet, KLD and STNN. The experimental results validate that the DSCNN-SA model is suitable for EEG-based MI recognition.
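Converting continuous features into the spike trains an SNN consumes is commonly done with rate coding, where a normalized feature value becomes the per-timestep firing probability. A generic NumPy sketch (the paper's exact encoding scheme is not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(features: np.ndarray, n_steps: int = 100) -> np.ndarray:
    """Poisson-style rate coding: a feature in [0, 1] sets the spike
    probability at each of n_steps timesteps."""
    f = np.clip(features, 0.0, 1.0)
    return (rng.random((n_steps,) + f.shape) < f).astype(np.uint8)

x = np.array([0.1, 0.9])            # two normalized EEG features
spikes = rate_encode(x, n_steps=1000)
print(spikes.mean(axis=0))           # empirical firing rates, roughly [0.1, 0.9]
```

The downstream spiking layers then operate purely on these binary trains, which is what makes the firing-pattern readout interpretable in biological terms.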
Citations: 0
Analysis of optical tweezers single-molecule force spectroscopy based on a signal-enhanced denoising model
IF 4.9 | CAS Medicine Tier 2 | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-02-04 | DOI: 10.1016/j.bspc.2026.109761
Linyao Chen, Jingru Sun, Le Wang, Hao Huang, Yanghui Li
Optical tweezers-based single-molecule force spectroscopy enables nanoscale investigation of biological molecules but is plagued by noise that interferes with Force-Distance Curves (FDCs). This study presents an automated analysis method comprising a Sliding Slice Denoiser (SSD) and an FDC Analysis Module. The SSD employs adaptive segmentation and a neural network integrated with Inception blocks and Self-Attention modules for denoising, then reconstructs high signal-to-noise ratio (SNR) FDCs. The module performs folding event quantification, site localization, and Worm-Like Chain fitting to extract biophysical parameters. Tests on single-fold deoxyribonucleic acid (DNA) hairpins show improved SNR, with the distance signal increasing from 21.8 dB to 53.6 dB and the force signal from 30.9 dB to 53.2 dB. Mean absolute errors at the fold site are low, approximately 0.097 pN for force and 0.73 nm for distance, with a coefficient of determination exceeding 0.97. For simulated FDCs with 1 to 6 folds, the overall fold-count prediction accuracy reaches 99%.
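The SNR gains reported above are in decibels. A generic sketch of the dB computation, assuming a clean reference signal is available to define the residual noise:

```python
import numpy as np

def snr_db(clean: np.ndarray, noisy: np.ndarray) -> float:
    """SNR in decibels: 10*log10 of signal power over residual-noise power."""
    noise = noisy - clean
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

# A 5 Hz sine contaminated with weak Gaussian noise
t = np.linspace(0.0, 1.0, 2000)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.01 * np.random.default_rng(1).standard_normal(t.size)
print(round(snr_db(clean, noisy), 1))
```

Because the scale is logarithmic, the reported jump from 21.8 dB to 53.6 dB corresponds to a roughly 1500-fold improvement in the signal-to-noise power ratio.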
Citations: 0
MultiScaleSegNet: A novel framework for multi-modal brain tumor segmentation
IF 4.9 | CAS Medicine Tier 2 | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2026-06-01 | Epub Date: 2026-02-04 | DOI: 10.1016/j.bspc.2026.109786
Syed Fakhar Bilal , Jianqiang Li , Jun Qian , Saqib Ali , Muhammad Arif , Baolin Zhu , Lijun Zhao
Accurate segmentation of brain tumors from multi-modal MRI is critical for diagnosis and treatment, but remains challenging due to heterogeneous tumor morphology, ambiguous boundaries, and the need to integrate both local details and global context. To address these challenges, we propose MultiScaleSegNet, a novel encoder–decoder framework that synergistically integrates a Swin Transformer encoder with a DenseNet-based decoder. Our model introduces three key components: (1) a Dual-Path Attention mechanism for feature extraction (DPA-MFE) that preserves spatial details while modeling long-range dependencies; (2) an Attention-based Feature Enhanced Network (AFENet) at the bottleneck to recalibrate features channel-wise and spatially; and (3) a Cross-Feature Refinement (CFR) block that expands the receptive field using dilated convolutions. The decoder further leverages CFR-refined skip connections to recover precise boundary information. Trained with a hybrid BCE-Dice loss on BraTS 2020 and BraTS 2021 datasets, our model achieves state-of-the-art performance, with average Dice scores of 0.972 and 0.987, respectively. Extensive experiments, including ablation studies and comparisons with existing methods, demonstrate that MultiScaleSegNet provides a robust and accurate solution for brain tumor segmentation, offering a strong foundation for clinical applications.
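A hybrid BCE-Dice loss of the kind the model is trained with can be sketched in NumPy; the equal weighting `alpha = 0.5` is an assumption for illustration, and the paper's exact formulation may differ:

```python
import numpy as np

def bce_dice_loss(pred: np.ndarray, target: np.ndarray,
                  alpha: float = 0.5, eps: float = 1e-7) -> float:
    """Hybrid loss: alpha * binary cross-entropy + (1 - alpha) * (1 - soft Dice)."""
    p = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    bce = -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))
    dice = (2 * np.sum(p * target) + eps) / (np.sum(p) + np.sum(target) + eps)
    return alpha * bce + (1 - alpha) * (1 - dice)

pred = np.array([0.9, 0.8, 0.1, 0.2])    # sigmoid outputs for 4 voxels
target = np.array([1.0, 1.0, 0.0, 0.0])  # ground-truth mask
good = bce_dice_loss(pred, target)
bad = bce_dice_loss(1 - pred, target)
print(good < bad)  # closer predictions give a lower loss
```

Combining the two terms is a common design choice: BCE gives smooth per-voxel gradients, while the Dice term counteracts the foreground/background imbalance typical of tumor masks.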
Citations: 0
DM-FNet: Deep Multi-scale Fusion Attention Network for white blood cell classification
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2026-06-01 Epub Date: 2026-02-02 DOI: 10.1016/j.bspc.2026.109666
Amit Kumar, Sandesh Aryal, Sandeep Madarapu, Mohammad Iman Junaid, Samit Ari
The precise identification and categorization of white blood cells (WBCs) is crucial for diagnosing blood disorders such as leukopenia and neutropenia. Current manual analysis techniques that rely on microscopic blood smears are labor-intensive and susceptible to human error. Accurate classification of WBCs with CNNs is difficult due to several dataset-related issues: high intra-class variability and class overlap make distinguishing between cell types complex, while data quality problems such as noisy or blurry images and variable imaging conditions further complicate training. To overcome these challenges, an advanced deep learning architecture, the Deep Multi-scale Fusion Attention Network (DM-FNet), is proposed to enhance automated WBC classification. Our methodology incorporates a novel Dilated Kernel Convolutional Attention Block (DKCAB) that adeptly manages cell overlap and partial visibility through multi-scale dilated convolutions, and a Context Attention Block (CAB) that improves classification accuracy by concentrating on diagnostically significant cellular features. In comprehensive experiments on the standard PBC and Raabin datasets, our model outperforms existing techniques. Ablation studies indicate that the DKCAB component increases classification accuracy by 9.01 percentage points, and the overall system achieves outstanding performance: 99.5% on the PBC dataset and 98.54% on the Raabin dataset.
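The multi-scale dilated convolutions in DKCAB widen the receptive field without extra parameters: each stride-1 convolution adds (kernel_size - 1) * dilation pixels to the context seen by one output. A small calculator for this generic formula (the actual DKCAB kernel sizes and dilation rates are not given in the abstract, so the branch configuration below is hypothetical):

```python
def receptive_field(layers):
    """Receptive field of stacked stride-1 convolutions.

    `layers` is a list of (kernel_size, dilation) pairs; each layer adds
    (kernel_size - 1) * dilation pixels to the field seen by one output.
    """
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

# Hypothetical parallel 3x3 branches with dilations 1, 2 and 4: the same
# parameter count covers 3-, 5- and 9-pixel contexts, so one block can
# respond to overlapping cells at several scales simultaneously.
branches = {d: receptive_field([(3, d)]) for d in (1, 2, 4)}
```

This is why multi-scale dilation helps with cell overlap and partial visibility: narrow branches resolve fine boundaries while wide branches see whole neighboring cells.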
Citations: 0
AXNet: Attention-enhanced X-ray network for pneumonia detection
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date: 2026-06-01 Epub Date: 2026-01-30 DOI: 10.1016/j.bspc.2026.109618
Mojtaba Jahanian, Abbas Karimi, Nafiseh Osati Eraghi, Faraneh Zarafshan

Background:
Pneumonia remains one of the leading causes of childhood mortality worldwide, especially in low-resource clinical settings where access to expert radiologists is limited. Automated and interpretable deep learning models can provide rapid and reliable diagnostic support.

Objective:
This study introduces AXNet+ECA, a lightweight attention-augmented convolutional neural network designed to improve pneumonia detection from pediatric chest X-ray (CXR) images while ensuring computational efficiency and interpretability. The novelty of AXNet+ECA lies in the dual-attention integration of the Convolutional Block Attention Module (CBAM) and Efficient Channel Attention (ECA) mechanisms within a lightweight backbone, jointly enhancing diagnostic accuracy and model interpretability while maintaining computational frugality.

Methods:
The proposed model builds upon the ResNet-18 backbone by embedding CBAM blocks within each residual stage and appending an ECA head for fine-grained channel calibration. AXNet+ECA was trained and evaluated on 5863 pediatric chest X-ray images from the publicly available Kaggle pneumonia dataset, using an 80–10–10 train/validation/test split. Evaluation encompassed baseline comparisons, ablation studies, robustness analysis, and statistical significance testing.

Results:
AXNet+ECA achieved a test accuracy of 93.6%, an F1-score of 93.1%, and an AUC of 0.964, outperforming or matching CNN baselines (ResNet-18, DenseNet-121, VGG-16, CheXNet) and recent transformer-based models (ViT-B/16, Swin-T). Despite this competitive performance, AXNet+ECA requires only 13.1M parameters and 4.7 ms/image inference time, highlighting its computational efficiency. Visual interpretability via CBAM and Grad-CAM revealed 86.7% alignment with radiologist-annotated abnormalities.

Conclusion:
By integrating dual-path attention within a compact architecture, AXNet+ECA achieves an effective balance between diagnostic accuracy, interpretability, and efficiency. These characteristics underline its potential for real-time clinical deployment in resource-constrained healthcare environments and large-scale screening initiatives.
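The ECA head calibrates channels with a 1-D convolution whose kernel size is usually derived adaptively from the channel count. Below is a sketch of the standard ECA-Net heuristic with gamma=2 and b=1 (the defaults from the original ECA paper; AXNet+ECA's actual settings are not stated in the abstract):

```python
import math

def eca_kernel_size(channels, gamma=2, b=1):
    """Adaptive 1-D kernel size from the ECA-Net heuristic.

    k is derived from (log2(C) + b) / gamma and rounded up to an odd
    integer, so stages with more channels get a wider cross-channel
    interaction window without any fully-connected layers.
    """
    t = int(abs(math.log2(channels) + b) / gamma)
    return t if t % 2 else t + 1

# ResNet-18 stages emit 64, 128, 256 and 512 channels.
sizes = {c: eca_kernel_size(c) for c in (64, 128, 256, 512)}
```

Because the kernel acts on the channel dimension after global pooling, this adds only a handful of parameters, which is consistent with the 13.1M total reported above.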
Citations: 0
Journal: Biomedical Signal Processing and Control