
Latest articles in Biomedical Signal Processing and Control

Artificial intelligence-based multiclass diabetes risk stratification for big data embedded with explainability: From machine learning to attention models
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-17 | DOI: 10.1016/j.bspc.2025.107672
Ekta Tiwari , Siddharth Gupta , Anudeep Pavulla , Mustafa Al-Maini , Rajesh Singh , Esma R. Isenovic , Sumit Chaudhary , John L. Laird , Laura Mantella , Amer M. Johri , Luca Saba , Jasjit S. Suri

Background

Globally, diabetes mellitus is a major health challenge with high morbidity and significant costs. Traditional methods rely on invasive biomarkers like glycated hemoglobin and lack consistency, necessitating more robust approaches.

Methodology

This study uses attention-based deep learning for enhanced diabetes risk stratification, focusing on recurrent neural networks with attention mechanisms. We used K-fold (K = 5) cross-validation and implemented 14 models to assess robustness. We further integrate an explainability paradigm by validating model outputs through reliability-focused statistical tests. Finally, we compare training times across different hardware platforms.
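The K-fold (K = 5) protocol described above can be sketched in a few lines. This is a generic illustration, not the authors' code; the model interface (`fit`/`score`) is an assumption.

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), k)

def cross_validate(model_fn, X, y, k=5):
    """Train a fresh model per fold; return the k held-out scores."""
    folds = kfold_indices(len(X), k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = model_fn()                     # new model each fold
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))
    return scores
```

Each of the 14 models would be passed in as `model_fn`, and the fold-wise scores averaged for comparison.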

Results

The attention-based models demonstrated superior performance in handling multi-dimensional data, yielding highly accurate diabetes risk stratification. Benchmarked against classical methods, they showed significant improvements over traditional models, with area-under-the-curve (AUC) scores reaching 0.99. The relative improvement over non-attention-based models was 3.67%, and the models generalized well when trained on only 60% of the training data.
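The abstract reports a 3.67% improvement over non-attention models without stating whether it is relative or absolute; assuming a relative gain, the implied baseline AUC can be back-calculated:

```python
def pct_improvement(new, old):
    """Relative improvement of `new` over `old`, in percent."""
    return 100.0 * (new - old) / old

# With the attention-model AUC of 0.99 and a 3.67% relative gain,
# the implied non-attention baseline AUC is 0.99 / 1.0367, about 0.955.
baseline_auc = 0.99 / (1 + 3.67 / 100)
```

If the 3.67% were instead an absolute AUC difference, the baseline would be about 0.953, so the two readings are close here.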

Conclusion

The attention-based models employed in this study substantially enhance diabetes risk stratification, offering a promising tool for healthcare professionals. They enable early and precise stratification of diabetes risk, potentially improving patient outcomes through timely and tailored interventions. This research underscores the potential of sophisticated deep learning models in transforming chronic disease management.
Citations: 0
SAGCN: Self-adaptive Graph Convolutional Network for pneumonia detection
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-17 | DOI: 10.1016/j.bspc.2025.107634
Junding Sun , Jianxiang Xue , Zhaozhao Xu , Ningshu Li , Chaosheng Tang , Lei Zhao , Bin Pu , Yudong Zhang
Pneumonia, due to its high incidence and potential lethality, necessitates rapid and accurate diagnostic methods. Chest X-rays and CT scans are pivotal tools in pneumonia diagnosis. Traditional image analysis techniques depend heavily on the expertise of radiologists, introducing subjectivity and inconsistency, and they are inefficient when processing large datasets. Deep learning techniques, especially Convolutional Neural Networks (CNNs), have made significant advances in medical image analysis, improving the accuracy and efficiency of pneumonia detection. However, CNNs struggle with lung images of irregular shape and distribution: they mainly extract local features and capture global structural information and lesion correlations poorly. Graph Convolutional Networks (GCNs) extend the convolution operation from regular grid data to irregular graph data through adjacency matrices and node features, better capturing global correlations in irregular image structures. To address the limitations of the traditional GCN message-passing mechanism, we propose a novel k-hop graph construction algorithm that minimizes redundant connections in higher-order graphs. We also introduce the Self-Adaptive Graph Convolutional Network (SAGCN), which incorporates a graph convolution method that aggregates information across various hop distances; the aggregation range is adjusted by varying the hop value k. Additionally, we integrate a graph attention mechanism to mitigate the impact of higher-order graph alterations on node connectivity, and our Node Adaptive Range Fusion (NARF) module enables effective multi-hop feature fusion and eliminates issues associated with non-interactive nodes.
We evaluated SAGCN on two public pneumonia datasets, achieving accuracies of 98.34% and 97.22%, respectively. These results significantly surpass several state-of-the-art methods, confirming the efficacy of SAGCN in pneumonia detection.
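One plain way to build the exactly-k-hop graphs the abstract describes is via powers of the adjacency matrix, keeping only node pairs first reachable at hop k so that lower-order connections are not duplicated. This is a generic sketch, not the authors' redundancy-minimizing algorithm:

```python
import numpy as np

def k_hop_adjacency(A, k):
    """Adjacency of node pairs whose shortest-path distance is exactly k.

    A is a symmetric 0/1 adjacency matrix without self-loops. Keeping only
    exactly-k-hop pairs avoids re-adding connections already present in the
    lower-order graphs, which is the redundancy the SAGCN paper targets.
    """
    n = A.shape[0]
    A = (A > 0).astype(int)
    power = np.eye(n, dtype=int)   # nodes linked by a walk of current length
    reach = np.eye(n, dtype=int)   # nodes reachable within current hop count
    reach_prev = reach.copy()
    for _ in range(k):
        reach_prev = reach.copy()
        power = ((power @ A) > 0).astype(int)
        reach = ((reach + power) > 0).astype(int)
    exact = reach * (1 - reach_prev)   # reachable in <= k but not <= k-1
    np.fill_diagonal(exact, 0)
    return exact
```

Each hop-k adjacency then feeds its own aggregation branch, with the hop value k controlling the aggregation range.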
Citations: 0
A new morphological classification of keratoconus using few-shot learning in candidates for intrastromal corneal ring implants
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-17 | DOI: 10.1016/j.bspc.2025.107664
Zhila Agharezaei , Mohammad Shirshekar , Reza Firouzi , Samira Hassanzadeh , Siamak Zarei-Ghanavati , Kambiz Bahaadinbeigy , Amin Golabpour , Laleh Agharezaei , Amin Amiri Tehranizadeh , Amir Hossein Taherinia , Mohammadreza Hoseinkhani , Reyhaneh Akbarzadeh , Mohammad Reza Sedaghat , Saeid Eslami
In ophthalmology, the accurate classification of keratoconus (KCN) subtypes is vital for effective surgical planning and the successful implantation of intracorneal ring segments (ICRS). During diagnosis, ophthalmologists must review demographic data and clinical ophthalmic examinations, a process that is time-consuming and prone to error. This research conducted a comprehensive study on the diagnosis and treatment of different types of KCN using a few-shot learning (FSL) technique with deep learning models based on corneal topography images and the Keraring nomogram. The retrospective cross-sectional study included 268 corneal images from 175 patients who underwent Keraring segment implantation between May 2020 and September 2022. We developed multiple transfer learning techniques and a prototypical network to identify and classify corneal disorders. The study achieved accuracy rates ranging from 88% for AlexNet to 98% for MobileNet-V3 and GoogLeNet, and AUC values ranging from 0.96 for VGG16 to 0.99 for MNASNet, EfficientNet-V2, and GoogLeNet in classifying KCN types. The results demonstrate the potential of FSL to address the challenge of limited medical image datasets, providing reliable performance in categorizing KCN types and improving surgical decision-making. Our application detects KCN patterns and proposes personalized, fully automated surgical planning for each patient, replacing former manual calculations.
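The prototypical-network step can be illustrated with a minimal numpy sketch: each class prototype is the mean of its support embeddings, and queries are assigned to the nearest prototype. Embeddings are assumed to come from a precomputed backbone; this is the standard formulation, not the authors' exact pipeline.

```python
import numpy as np

def prototype_classify(support, support_labels, queries):
    """Assign each query embedding to the class with the nearest prototype.

    A prototype is the mean of a class's support embeddings; distances are
    squared Euclidean, as in the standard prototypical-network formulation.
    """
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in classes])
    # pairwise squared distances: (n_queries, n_classes)
    d2 = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[d2.argmin(axis=1)]
```

With only a handful of labeled topography maps per KCN subtype, this nearest-prototype rule is what makes the few-shot setting workable.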
Citations: 0
Calibrated multi-view graph learning framework for infant cognitive abilities prediction
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-17 | DOI: 10.1016/j.bspc.2025.107605
Tong Xiong , Xin Zhang , Jiale Cheng , Xiangmin Xu , Gang Li
Early prediction of cognitive development holds significant importance in neonatal healthcare, especially given the high incidence of cognitive deficits and developmental delays in preterm infants. Previous work has investigated the intrinsic relationship between brain cortical morphology and cognitive skills, leveraging this connection for prognostication. However, the small proportion of subjects with cognitive deficits in a cohort limits the predictive power of previous models: the data imbalance issue. To tackle this challenge, we present the Calibrated Multi-view Graph Learning (CMGL) framework for cognition score prediction, a cortical graph learning model designed for the imbalanced regression scenario. To collaboratively capture morphological relations among brain regions, a multi-view cortical graph is constructed from cortical developmental correlation and adaptive morphological similarity. On top of this graph, we train a diffusion graph convolutional backbone to obtain the cortical graph representation. To counter the data imbalance, we propose a feature clustering module that calibrates the learned feature space, reducing training bias toward dominant classes, and we introduce a smoothed, reweighted mean absolute error loss based on label distribution smoothing to guide training in continuous imbalanced scenarios. In cross-validation on our in-house dataset, CMGL achieves a mean squared error of 0.1596, demonstrating state-of-the-art performance compared with related methods.
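Label distribution smoothing (LDS) for imbalanced regression can be sketched as follows: convolve the empirical label histogram with a Gaussian kernel to get an effective density, then weight each sample's loss by the inverse smoothed density of its label bin. The bin count and kernel width below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def lds_weights(labels, bins=20, sigma=1.0):
    """Inverse effective-density weights via label distribution smoothing.

    The empirical label histogram is smoothed with a Gaussian kernel over
    bin offsets; each sample is weighted by the inverse smoothed density of
    its bin, normalized to mean 1 so the overall loss scale is unchanged.
    """
    hist, edges = np.histogram(labels, bins=bins)
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    smoothed = np.convolve(hist.astype(float), kernel, mode="same")
    bin_idx = np.clip(np.digitize(labels, edges[1:-1]), 0, bins - 1)
    w = 1.0 / np.maximum(smoothed[bin_idx], 1e-8)
    return w / w.mean()

def weighted_mae(pred, target, weights):
    """Reweighted mean absolute error used to counter label imbalance."""
    return float((weights * np.abs(pred - target)).mean())
```

Samples with rare cognition scores thus contribute more to the loss, counteracting the bias toward the dominant score range.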
Citations: 0
Detection of breath cycles in pediatric lung sounds via an object detection-based transfer learning method
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-16 | DOI: 10.1016/j.bspc.2025.107693
Sa-Yoon Park , Ji Soo Park , Jisoo Lee , Hyesu Lee , Yelin Kim , Dong In Suh , Kwangsoo Kim
Auscultation is critical for assessing the respiratory system in children; however, the lack of pediatric lung sound databases impedes the development of automated analysis tools. This study introduces an object detection-based transfer learning method to accurately predict breath cycles in pediatric lung sounds. We utilized a model based on the YOLOv1 architecture, initially pre-trained on an adult lung sound dataset (HF_Lung_v1) and subsequently fine-tuned on a pediatric dataset (SNUCH_Lung). The input feature was the log Mel spectrogram, which effectively captured the relevant frequency and temporal information. The pre-trained model achieved an F1 score of 0.900 ± 0.003 on the HF_Lung_v1 dataset; after fine-tuning, it reached an F1 score of 0.824 ± 0.009 on the SNUCH_Lung dataset, confirming the efficacy of transfer learning. This model surpassed a baseline trained solely on the SNUCH_Lung dataset without transfer learning. We also explored the impact of segment length, width, and various audio feature extraction techniques; the optimal results were obtained with 15-second segments, a 2-second width, and the log Mel spectrogram. The model is promising for clinical applications such as generating large-scale annotated datasets, visualizing and labeling individual breath cycles, and performing correlation analysis with physiological indicators. Future research will focus on expanding the pediatric lung sound database through auto-labeling techniques and integrating the model into stethoscopes for real-time analysis. This study highlights the potential of object detection-based transfer learning in enhancing the accuracy of breath cycle prediction in pediatric lung sounds and advancing pediatric respiratory sound analysis tools.
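Scoring detected breath cycles against annotations typically uses IoU-based matching of time intervals, which is how object-detection F1 scores like those above are computed. The sketch below is a generic detection F1; the 0.5 IoU threshold is an assumption, not the paper's stated criterion.

```python
def interval_iou(a, b):
    """Intersection-over-union of two (start, end) time intervals."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def detection_f1(pred, truth, iou_thresh=0.5):
    """F1 for detected breath cycles.

    Each ground-truth interval may be matched by at most one predicted
    interval whose IoU meets the threshold; unmatched predictions are false
    positives, unmatched ground-truth cycles are false negatives.
    """
    matched = set()
    tp = 0
    for p in pred:
        best, best_iou = None, iou_thresh
        for i, t in enumerate(truth):
            iou = interval_iou(p, t)
            if i not in matched and iou >= best_iou:
                best, best_iou = i, iou
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(pred) - tp
    fn = len(truth) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
```

The same matching also yields the per-cycle labels needed for the auto-labeling pipeline mentioned above.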
Citations: 0
Contrastive reinforced transfer learning for EEG-based emotion recognition with consideration of individual differences
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-15 | DOI: 10.1016/j.bspc.2025.107622
Zhibang Zang , Xiangkun Yu , Baole Fu , Yinhua Liu , Shuzhi Sam Ge
Electroencephalography (EEG) data exhibit significant individual differences, posing challenges in generalizing emotion recognition models across individuals. Transfer learning (TL) can leverage knowledge from other individuals to help models better account for these differences. Existing research predominantly focuses on minimizing negative transfer, whereas this paper aims both to mitigate negative transfer and to maximize positive transfer. A fundamental deep transfer learning system was examined to achieve maximum positive transfer, and an optimal strategy was formulated to choose the most adaptable knowledge, specifically EEG features, from the source domain to accommodate target individual differences. To realize this strategy, a Control Agent (CA) applies a reinforcement learning algorithm, specifically Q-learning, to extract the most beneficial knowledge from the source domain. To assess the effectiveness of actions more accurately during learning, the Reinforced Transfer Learning (RTL) and TL methods are compared directly, yielding the proposed Contrastive Reinforced Transfer Learning (CRTL) method. CRTL achieved average recognition accuracies of 91.26% (valence) and 90.43% (arousal) on the Database for Emotion Analysis using Physiological Signals (DEAP) dataset and 93.57% on the SJTU Emotion EEG Dataset (SEED), demonstrating exceptional performance in emotion recognition tasks. Extensive experimental results show that CRTL significantly improves on TL by effectively extracting the most beneficial knowledge from the source domain.
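The Control Agent's Q-learning step can be sketched generically: a tabular temporal-difference update plus an epsilon-greedy choice among candidate source-domain feature sets. States, actions, and rewards here are illustrative placeholders; the paper's formulation may differ.

```python
import numpy as np

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q

def select_action(Q, state, epsilon, rng):
    """Epsilon-greedy choice over candidate source-domain feature sets."""
    if rng.random() < epsilon:
        return int(rng.integers(Q.shape[1]))
    return int(Q[state].argmax())
```

In this setting the reward would come from the contrastive comparison, e.g. the target-subject validation gain of RTL over plain TL for the chosen feature set.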
Citations: 0
Deep learning-driven segmentation of ischemic stroke lesions using multi-channel MRI
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-15 | DOI: 10.1016/j.bspc.2025.107676
Ashiqur Rahman , Muhammad E.H. Chowdhury , Md Sharjis Ibne Wadud , Rusab Sarmun , Adam Mushtak , Sohaib Bassam Zoghoul , Israa Al-Hashimi
Ischemic stroke, caused by cerebral vessel occlusion, presents substantial challenges in medical imaging due to the variability and subtlety of stroke lesions. Magnetic Resonance Imaging (MRI) plays a crucial role in diagnosing and managing ischemic stroke, yet existing segmentation techniques often fail to accurately delineate lesions. This study introduces a novel deep learning-based method for segmenting ischemic stroke lesions using multi-channel MRI modalities, including Diffusion Weighted Imaging (DWI), Apparent Diffusion Coefficient (ADC), and enhanced Diffusion Weighted Imaging (eDWI). The proposed architecture integrates DenseNet121 as the encoder with Self-Organized Operational Neural Networks (SelfONN) in the decoder, enhanced by Channel and Space Compound Attention (CSCA) and Double Squeeze-and-Excitation (DSE) blocks. Additionally, a custom loss function combining Dice Loss and Jaccard Loss with weighted averages is introduced to improve model performance. Trained and evaluated on the ISLES 2022 dataset, the model achieved Dice Similarity Coefficients (DSC) of 83.88 % using DWI alone, 85.86 % with DWI and ADC, and 87.49 % with the integration of DWI, ADC, and eDWI. This approach not only outperforms existing methods but also addresses key limitations in current segmentation practices. These advancements significantly enhance diagnostic precision and treatment planning for ischemic stroke, providing valuable support for clinical decision-making.
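The custom loss combining Dice and Jaccard with weighted averages can be sketched in numpy on soft predictions; the 0.5/0.5 weights below are illustrative, since the paper's weights are not stated here.

```python
import numpy as np

def dice_jaccard_loss(pred, target, w_dice=0.5, w_jaccard=0.5, eps=1e-7):
    """Weighted average of soft Dice and Jaccard (IoU) losses.

    pred holds probabilities in [0, 1], target holds binary lesion masks;
    eps avoids division by zero on empty masks. The 0.5/0.5 weights are
    illustrative assumptions, not the paper's settings.
    """
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    inter = (pred * target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    jaccard = (inter + eps) / (pred.sum() + target.sum() - inter + eps)
    return w_dice * (1 - dice) + w_jaccard * (1 - jaccard)
```

Since Jaccard penalizes partial overlap more harshly than Dice, blending the two lets the weights trade off sensitivity to small lesions against boundary precision.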
Deep learning-driven segmentation of ischemic stroke lesions using multi-channel MRI
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-02-15 DOI: 10.1016/j.bspc.2025.107676
Ashiqur Rahman, Muhammad E.H. Chowdhury, Md Sharjis Ibne Wadud, Rusab Sarmun, Adam Mushtak, Sohaib Bassam Zoghoul, Israa Al-Hashimi
Citations: 0
IVCAN: An improved visual curve attention network for fNIRS-Based motor imagery/execution classification
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-02-15 DOI: 10.1016/j.bspc.2025.107679
Yu Li , Shuran Li , Zhizheng Yuan , Shaoqing Zhao , Feng Wan , Tao Xu , Haiyan Zhang , Hongtao Wang
Owing to their advantages of high spatial resolution and portability, functional near-infrared spectroscopy (fNIRS)-based Motor Imagery/Execution (MI/ME) paradigms have become promising approaches and are widely used in daily rehabilitation for neural plasticity enhancement. However, in real rehabilitation training scenarios, it is essential to improve the accuracy and effectiveness of MI/ME decoding and the visualization of brain activation. In this study, a curve attention mechanism for fNIRS-based MI/ME classification was first proposed to capture spatial features and hemodynamic responses. Second, inspired by the visual attention network (VAN) used in image classification, we designed a network combining curve attention and VAN, called IVCAN. To evaluate the performance of IVCAN, two public ME datasets (Datasets A and C) and one self-collected MI dataset (Dataset B) were used. The experimental results show that the average accuracies were 85.52 %, 75.78 %, and 61.73 % for the three datasets, respectively, while the cross-subject average accuracies were 84.20 %, 75.37 %, and 61.84 %. More interestingly, brain activation patterns across different tasks were analyzed, demonstrating that the MI task requires the synergistic activation of more brain regions, while the ME task necessitates intense activity in specific brain areas. Overall, on the one hand, this work provides a new, unified decoding method for fNIRS-based MI/ME; on the other hand, it elucidates the differences and connections in how the brain processes various tasks from a hemodynamic perspective. The commonalities and differences in brain activation found in this study provide guidance and solutions for addressing the universality and personalization of fNIRS-based brain-computer interfaces.
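The abstract does not spell out the curve attention mechanism itself; as a hedged illustration of the general idea of re-weighting fNIRS channels by attention scores, the following numpy sketch uses channel variance as a stand-in for IVCAN's learned scoring:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def channel_attention(signals):
    """Re-weight multichannel time series by attention scores.

    signals: (channels, time) array. Scores here are each channel's
    variance -- an illustrative stand-in for a learned scoring function.
    """
    scores = signals.var(axis=1)      # one score per channel
    weights = softmax(scores)         # normalize scores to a distribution
    return signals * weights[:, None], weights
```

Channels with more informative (higher-variance) hemodynamic curves receive larger weights, and the weights sum to 1.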
Citations: 0
Residual u-net with Self-Attention based deep convolutional adaptive capsule network for liver cancer segmentation and classification
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-02-15 DOI: 10.1016/j.bspc.2025.107665
R. Archana, L. Anand
Every year, almost 1.5 million people die from liver cancer worldwide. The medical sector uses computed tomography (CT) to detect liver cancer early, potentially saving millions of lives per year. However, existing models for liver cancer segmentation and classification face significant challenges, including high computational complexity, limited accuracy, and inadequate handling of noisy data. These limitations necessitate the development of a simple, dependable, and accurate automated technique for interpreting, detecting, and analyzing CT scans. Thus, the proposed study aims to develop an automated deep-learning mechanism for segmenting and classifying liver cancer. Initially, the input images are gathered from publicly available datasets. Each sample is pre-processed to remove undesired noise using Modified Self-Guided Filtering (MSGF), and the images are scaled for improved processing. From the pre-processed images, the tumor-affected regions are segmented by introducing a new Residual Squeeze-Excited UNet (RSE_UNet) architecture, in which residual blocks and squeeze-excitation blocks are incorporated into the UNet model to obtain precise segmentation results. These segmented images serve as the input for the proposed Self-Attention based Deep Convolutional Adaptive Capsule Network (SA_DCACN) model. The deep convolutional layers help capture essential features, dimensionality issues are addressed with a self-attention mechanism, and the adaptive capsule network aids in effectively classifying the input images.
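The squeeze-excitation blocks incorporated into RSE_UNet follow the standard Squeeze-and-Excitation pattern: global-average-pool each channel, pass the pooled vector through a small bottleneck, and rescale the channels by the resulting gates. A minimal numpy sketch (the weight shapes and reduction ratio are illustrative, not taken from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feature_map, w1, w2):
    """Squeeze-and-Excitation: pool, bottleneck, and rescale channels.

    feature_map: (C, H, W); w1: (C//r, C); w2: (C, C//r) for ratio r.
    """
    squeezed = feature_map.mean(axis=(1, 2))   # (C,) global average pooling
    hidden = np.maximum(w1 @ squeezed, 0.0)    # ReLU bottleneck
    scale = sigmoid(w2 @ hidden)               # per-channel gates in (0, 1)
    return feature_map * scale[:, None, None]  # recalibrated feature map
```

Because the gates lie in (0, 1), the block attenuates less informative channels while preserving the feature map's shape.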
Also, the ability of the proposed model is enhanced by fine-tuning its parameters through an Extended Gannet Optimization (EGO). Thus, the proposed study, implemented in MATLAB, effectively segments and classifies liver cancer from the input images, obtaining 96.94% accuracy on the LiTS dataset and 97.79% accuracy on the 3D-IRCADb-01 dataset.
Citations: 0
HSDG: A dual-prior semantic driven entropy grouping snapshot medical hyperspectral tongue image reconstruction method
IF 4.9 CAS Tier 2 (Medicine) Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-02-14 DOI: 10.1016/j.bspc.2025.107689
HuiYuan Zhang , ZhaoHua Yang , YiJing Chen , QianYue Tan , ZeYuan Dong , ChunYong Wang

Background and Objective

Tongue hyperspectral imaging (THSI) provides rich spectral information, which is crucial for various medical tasks. However, existing hyperspectral acquisition devices involve trade-offs between image quality, acquisition time, and cost. Additionally, jitter during tongue image acquisition necessitates snapshot image reconstruction, and there is a lack of established tongue hyperspectral datasets. This research aims to develop a high-quality reconstruction method with strong application potential for snapshot spectral imaging. The reconstruction method will be implemented in low-cost, self-developed spectral imaging devices to establish valuable tongue hyperspectral image datasets.

Methodology

First, this study proposes a Hierarchical Semantic-Driven Grouping (HSDG) reconstruction method based on two types of semantic annotations. This method effectively addresses the phenomenon of metamerism and the disruption of local spectral correlations caused by considering only global associations. It achieves this by calculating the information entropy of two types of semantic features to perform a reasonable spectral grouping and reconstruction. We validated the effectiveness and advancement of our method with three evaluation metrics for segmentation experiments and three image quality assessment metrics, comparing it against ten state-of-the-art reconstruction algorithms.
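The entropy-based grouping step can be illustrated in simplified form: compute the Shannon entropy of each spectral band and split the bands into groups of similar entropy. This sketch omits the dual-prior semantic features and is only a stand-in for the HSDG grouping:

```python
import numpy as np

def band_entropy(band, bins=32):
    """Shannon entropy (bits) of one spectral band's intensity histogram."""
    hist, _ = np.histogram(band, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

def group_bands_by_entropy(cube, n_groups=4):
    """Sort spectral bands by entropy and split into contiguous groups.

    cube: (bands, H, W) hyperspectral image. Returns the band-index
    groups and the per-band entropies.
    """
    entropies = np.array([band_entropy(b) for b in cube])
    order = np.argsort(entropies)      # band indices, lowest entropy first
    return np.array_split(order, n_groups), entropies
```

Bands with similar information content then land in the same reconstruction group, which is the intuition behind entropy-driven grouping.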

Results

According to the training results, compared to the best existing method, the proposed approach improved image quality by 1.36 dB and increased segmentation accuracy by 6.62 %. It can reconstruct clear detailed information and provide satisfactory spectral curves. The Peak Signal-to-Noise Ratio (PSNR) reached 34.15, the Structural Similarity (SSIM) index reached 0.8663, and the accuracy for image segmentation reached 0.9497.
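PSNR, reported above in dB, has a standard textbook definition that can be computed directly (this is the generic formula, not code from the paper):

```python
import numpy as np

def psnr(reference, reconstructed, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For images in [0, 1], a reconstruction with mean squared error 0.01 gives 10·log10(1/0.01) = 20 dB.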

Conclusion

The proposed HSDG addresses the severe issue of information degradation caused by compressing 24 spectral bands into a single dimension under different position light source modulations. It achieves this through dual prior semantic information grouping. Additionally, through tongue feature segmentation experiments, the proposed reconstruction method has been validated as most closely resembling the actual images, capable of delineating clear distinctions in tongue cracks and coating areas.
Citations: 0