
Latest Articles from Biomedical Signal Processing and Control

Metrics for comparison of image dataset and segmentation methods for fractal analysis of retinal vasculature
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-12 | DOI: 10.1016/j.bspc.2025.107650
Asmae Igalla El-Youssfi, José Manuel López-Alonso
Fractal analysis of retinal vasculature images has proven to be of great value both for the characterization of various pathologies and for the study of the vasculature in healthy retinas. To quantify this parameter, the treatment of the fractal object and the analysis conditions must be considered to ensure the validity of the results. Fractal and multifractal analysis of the retinal vasculature depends on several factors, including the fractal methods applied, the segmentation algorithm and calculation used, and especially the quality of the retinal image, which directly influences the accuracy of the segmentation. These factors can influence the calculation and analysis of the fractal or multifractal dimensions. In the present work, different metrics have been developed to quantify the differences introduced by different segmentation methods and image datasets. Using these metrics, the influence of the studied factors could be determined and quantified effectively. The results indicate that the developed metrics quantify these differences and provide criteria for identifying the best methods and protocols, which is relevant when using fractal and multifractal methods as an aid in retinal characterization and in the diagnosis of different anomalies.
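The abstract does not spell out how the dimension itself is estimated; as background, a minimal box-counting estimator for a binary vessel mask might look like the sketch below (Python, with illustrative box sizes; this is not the authors' metric set or protocol). The slope of log(box count) versus log(1/box size) gives the dimension estimate, and the paper's metrics compare how such estimates shift across segmentation methods and datasets.

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting fractal dimension of a binary vessel mask."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in box_sizes:
        # Trim the image so it tiles exactly into s x s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
        # Count boxes containing at least one vessel pixel.
        counts.append(tiles.any(axis=(1, 3)).sum())
    # Dimension = slope of log(count) against log(1/box size).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```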
{"title":"Metrics for comparison of image dataset and segmentation methods for fractal analysis of retinal vasculature","authors":"Asmae Igalla El-Youssfi,&nbsp;José Manuel López-Alonso","doi":"10.1016/j.bspc.2025.107650","DOIUrl":"10.1016/j.bspc.2025.107650","url":null,"abstract":"<div><div>Fractal analysis of images of the retinal vasculature is an instrument that has proven to be of great value both for the characterization of various pathologies and for the study of the vasculature in healthy retinas. To quantify this parameter, it is necessary to consider the treatment of the fractal object and the analysis conditions to ensure the validity of the results. Fractal and multifractal analysis of the retinal vasculature depends on several factors, including the fractal methods applied, the segmentation algorithm and calculation used, and especially the quality of the retinal image which directly influences the accuracy of the segmentation. These factors can influence the calculation and analysis of the fractal or multifractal dimensions. In the present work, different metrics have been developed to quantify the differences introduced by different segmentation methods and image datasets. Using the developed metrics, it has been possible to determine and quantify the influence of the factors studied effectively. The results indicate that the developed metrics allow to quantify these differences, as well as provide criteria on which are the best methods and protocols, which is relevant when using fractal and multifractal methods as an aid in retinal characterization and in the diagnosis of different anomalies.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107650"},"PeriodicalIF":4.9,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Airway segmentation using Uncertainty-based Double Attention Detail Supplement Network
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-12 | DOI: 10.1016/j.bspc.2025.107648
Dexu Wang , Ziyan Huang , Jingyang Zhang , Wei Wu , Zhikai Yang , Lixu Gu
Automatic pulmonary airway segmentation from thoracic computed tomography (CT) is an essential step for the diagnosis and interventional surgical treatment of pulmonary disease. While deep learning algorithms have shown promising results in segmenting the main and larger bronchi, segmentation of the distal small bronchi remains challenging due to their limited size and divergent spatial distribution. The study aims to address the challenges associated with segmenting the pulmonary airway, particularly focusing on the distal small bronchi. Specifically, we aim to improve the accuracy and completeness of airway segmentation by developing a novel deep-learning model. To achieve this purpose, we propose an Uncertainty-based Double Attention Detail Supplement Network (UDADS-Net) to identify and supply these missing details of the airway. We introduce the Uncertainty-based Double Attention Module (UDA), which utilizes the uncertainty-based attention module to obtain the regions with high uncertainty and utilizes another attention module to identify the missing details. Moreover, we also propose the Adaptive Multi-scale Module (AMS) to optimize the process of extracting details. Evaluation of our method on the ATM’2022 airway segmentation dataset demonstrates its effectiveness, especially for segmenting distal small bronchi. Our method significantly reduces missing and fragmented parts, leading to more accurate and complete airway segmentation, and achieving higher evaluation metrics compared to the state-of-the-art (SOTA) methods.
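As a rough illustration of the uncertainty idea only (the abstract does not give the UDA module's formulation), per-voxel predictive entropy computed from segmentation logits can serve as an uncertainty map that gates where a detail-refinement branch focuses; the gating step shown in the comment is a hypothetical usage, not the paper's design.

```python
import torch
import torch.nn.functional as F

def entropy_uncertainty_map(logits):
    """Per-voxel predictive entropy from segmentation logits of shape (B, C, D, H, W)."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1, keepdim=True)
    # Normalise by log(C) so the map lies in [0, 1].
    return entropy / torch.log(torch.tensor(float(logits.shape[1])))

# Hypothetical gating of a refinement branch by uncertainty:
# refined_features = coarse_features * (1.0 + entropy_uncertainty_map(logits))
```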
{"title":"Airway segmentation using Uncertainty-based Double Attention Detail Supplement Network","authors":"Dexu Wang ,&nbsp;Ziyan Huang ,&nbsp;Jingyang Zhang ,&nbsp;Wei Wu ,&nbsp;Zhikai Yang ,&nbsp;Lixu Gu","doi":"10.1016/j.bspc.2025.107648","DOIUrl":"10.1016/j.bspc.2025.107648","url":null,"abstract":"<div><div>Automatic pulmonary airway segmentation from thoracic computed tomography (CT) is an essential step for the diagnosis and interventional surgical treatment of pulmonary disease. While deep learning algorithms have shown promising results in segmenting the main and larger bronchi, segmentation of the distal small bronchi remains challenging due to their limited size and divergent spatial distribution. The study aims to address the challenges associated with segmenting the pulmonary airway, particularly focusing on the distal small bronchi. Specifically, we aim to improve the accuracy and completeness of airway segmentation by developing a novel deep-learning model. To achieve this purpose, we propose an Uncertainty-based Double Attention Detail Supplement Network (UDADS-Net) to identify and supply these missing details of the airway. We introduce the Uncertainty-based Double Attention Module (UDA), which utilizes the uncertainty-based attention module to obtain the regions with high uncertainty and utilizes another attention module to identify the missing details. Moreover, we also propose the Adaptive Multi-scale Module (AMS) to optimize the process of extracting details. Evaluation of our method on the ATM’2022 airway segmentation dataset demonstrates its effectiveness, especially for segmenting distal small bronchi. Our method significantly reduces missing and fragmented parts, leading to more accurate and complete airway segmentation, and achieving higher evaluation metrics compared to the state-of-the-art (SOTA) methods.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107648"},"PeriodicalIF":4.9,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Predicting athletic injuries with deep Learning: Evaluating CNNs and RNNs for enhanced performance and Safety
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-12 | DOI: 10.1016/j.bspc.2025.107692
Mohammad Mohsen Sadr , Mohsen Khani , Saeb Morady Tootkaleh
Identifying and predicting sports injuries is crucial for managing athletes’ performance and health. Recent advancements in deep learning have emerged as powerful tools for analyzing complex data and detecting injury patterns. This study investigates the effectiveness of deep learning algorithms, particularly Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs), in identifying and predicting injury patterns in athletes. Biometric data and motion videos from training sessions were collected and analyzed, focusing on RNN architectures, including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). The models were trained on diverse datasets and evaluated using performance metrics such as accuracy, precision, recall, and F1-score. The results indicate that the LSTM model achieved the highest accuracy at 91.5%, outperforming both the GRU model (90.8%) and the CNN model (89.2%). The precision and recall rates for the LSTM model were 89.7% and 88.3%, respectively, solidifying its superiority in the precise identification of potential injury patterns compared to CNNs. These findings highlight the capability of deep learning algorithms, particularly RNNs, in effectively predicting and managing sports injuries. This research emphasizes the importance of leveraging deep learning techniques for injury prevention and suggests future studies should focus on enhancing model accuracy through diverse and comprehensive datasets.
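A minimal sketch of the kind of recurrent classifier and evaluation reported above, assuming windowed biometric features as input; the hidden size, binary injury label, and choice of last-step pooling are placeholders rather than the study's configuration.

```python
import torch
import torch.nn as nn
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

class SequenceClassifier(nn.Module):
    """Minimal LSTM/GRU classifier over biometric time-series windows."""
    def __init__(self, n_features, hidden=64, cell="lstm"):
        super().__init__()
        rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU
        self.rnn = rnn_cls(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # injury vs. no injury

    def forward(self, x):                     # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])          # classify from the final time step

def report(y_true, y_pred):
    """The four metrics quoted in the abstract."""
    return {"accuracy": accuracy_score(y_true, y_pred),
            "precision": precision_score(y_true, y_pred),
            "recall": recall_score(y_true, y_pred),
            "f1": f1_score(y_true, y_pred)}
```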
{"title":"Predicting athletic injuries with deep Learning: Evaluating CNNs and RNNs for enhanced performance and Safety","authors":"Mohammad Mohsen Sadr ,&nbsp;Mohsen Khani ,&nbsp;Saeb Morady Tootkaleh","doi":"10.1016/j.bspc.2025.107692","DOIUrl":"10.1016/j.bspc.2025.107692","url":null,"abstract":"<div><div>Identifying and predicting sports injuries is crucial for managing athletes’ performance and health. Recent advancements in deep learning have emerged as powerful tools for analyzing complex data and detecting injury patterns. This study investigates the effectiveness of deep learning algorithms, particularly Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs), in identifying and predicting injury patterns in athletes. Biometric data and motion videos from training sessions were collected and analyzed, focusing on RNN architectures, including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). The models were trained on diverse datasets and evaluated using performance metrics such as accuracy, precision, recall, and F1-score. The results indicate that the LSTM model achieved the highest accuracy at 91.5%, outperforming both the GRU model (90.8%) and the CNN model (89.2%). The precision and recall rates for the LSTM model were 89.7% and 88.3%, respectively, solidifying its superiority in the precise identification of potential injury patterns compared to CNNs. These findings highlight the capability of deep learning algorithms, particularly RNNs, in effectively predicting and managing sports injuries. This research emphasizes the importance of leveraging deep learning techniques for injury prevention and suggests future studies should focus on enhancing model accuracy through diverse and comprehensive datasets.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107692"},"PeriodicalIF":4.9,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143395604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
LRCTNet: A lightweight rectal cancer T-staging network based on knowledge distillation via a pretrained swin transformer
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-12 | DOI: 10.1016/j.bspc.2025.107696
Jia Yan , Peng Liu , Tingwei Xiong , Mingye Han , Qingzhu Jia , Yixing Gao
Rectal cancer, a prevalent malignant neoplasm within the digestive system, significantly jeopardizes patient health and quality of life. Accurate preoperative T-staging is critical for developing effective treatment strategies. In areas with limited medical resources, computed tomography (CT) has become the norm because of its popularity and economy and is an important method for the initial diagnosis of disease. Despite major advancements in computer vision in recent years, large-scale models have high demands on hardware and datasets, making them difficult to use and deploy in resource-limited environments. To address this challenge, we designed two lightweight modules, LightFire and ResLightFire, and developed a lightweight rectal cancer T-staging network (LRCTNet). On this basis, we leveraged the swin transformer, transfer learning and knowledge distillation techniques to optimize the classification performance of the LRCTNet. The experimental results revealed that LRCTNet achieved a classification accuracy of 95.79%, precision of 93.91%, recall of 93.48%, F1 score of 93.70%, and Matthews correlation coefficient (MCC) of 94.38% while containing only 0.407 million parameters, which were much higher than those of lightweight models such as SqueezeNet, MobileNet, and EfficientNet. These results indicate that the model achieves a low misclassification rate and a low rate of missed detections, ensuring balanced performance in classification. The lightweight design of LRCTNet enables efficient deployment in resource-constrained environments without sacrificing accuracy, making it a valuable tool for rectal cancer diagnosis.
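For readers unfamiliar with knowledge distillation, the standard soft-target objective such a setup builds on is sketched below; the temperature and weighting are illustrative placeholders, and the abstract does not give the exact loss LRCTNet uses with its pretrained Swin Transformer teacher.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target KL term (teacher -> student) plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                       # rescale the soft term for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```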
{"title":"LRCTNet: A lightweight rectal cancer T-staging network based on knowledge distillation via a pretrained swin transformer","authors":"Jia Yan ,&nbsp;Peng Liu ,&nbsp;Tingwei Xiong ,&nbsp;Mingye Han ,&nbsp;Qingzhu Jia ,&nbsp;Yixing Gao","doi":"10.1016/j.bspc.2025.107696","DOIUrl":"10.1016/j.bspc.2025.107696","url":null,"abstract":"<div><div>Rectal cancer, a prevalent malignant neoplasm within the digestive system, significantly jeopardizes patient health and quality of life. Accurate preoperative T-staging is critical for developing effective treatment strategies. In areas with limited medical resources, computed tomography (CT) has become the norm because of its popularity and economy and is an important method for the initial diagnosis of disease. Despite major advancements in computer vision in recent years, large-scale models have high demands on hardware and datasets, making them difficult to use and deploy in resource-limited environments. To address this challenge, we designed two lightweight modules, LightFire and ResLightFire, and developed a lightweight rectal cancer T-staging network (LRCTNet). On this basis, we leveraged the swin transformer, transfer learning and knowledge distillation techniques to optimize the classification performance of the LRCTNet. The experimental results revealed that LRCTNet achieved a classification accuracy of 95.79%, precision of 93.91%, recall of 93.48%, F1 score of 93.70%, and Matthews correlation coefficient (MCC) of 94.38% while containing only 0.407 million parameters, which were much higher than those of lightweight models such as SqueezeNet, MobileNet, and EfficientNet. These results indicate that the model achieves a low misclassification rate and a low rate of missed detections, ensuring balanced performance in classification. The lightweight design of LRCTNet enables efficient deployment in resource-constrained environments without sacrificing accuracy, making it a valuable tool for rectal cancer diagnosis.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107696"},"PeriodicalIF":4.9,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143387520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Decoding emotions through personalized multi-modal fNIRS-EEG Systems: Exploring deterministic fusion techniques
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-12 | DOI: 10.1016/j.bspc.2025.107632
Alireza F. Nia, Vanessa Tang, Gonzalo D. Maso Talou, Mark Billinghurst

Objective:

Emotion recognition through cortical neurovascular measurements poses significant challenges due to the complex interplay of emotions within cortical activity. Although Electroencephalography (EEG) and functional Near-Infrared Spectroscopy (fNIRS) represent a promising combination for developing robust and accurate affective systems, the strategies for their integration remain relatively unexplored. This study aims to address this gap in personalized multimodal fNIRS-EEG emotion recognition systems.

Methods:

Confronted with the scarcity of large-scale multimodal fNIRS and EEG databases for emotion analysis, we developed our own dataset tailored for this research. We assessed widely used conventional models in single-modality emotion recognition to establish a benchmark. Various feature-level and decision-level fusion techniques were then implemented and evaluated against this benchmark.

Results:

The multimodal fNIRS-EEG system consistently outperformed single-modality systems, achieving notable accuracy improvements of 7.5%, 3%, and 6.5% across the valence, arousal, and dominance dimensions, respectively. These improvements underscore the robustness of multimodal systems in emotion detection. Additionally, the top-performing feature-level fusion technique (direct concatenation) was on par with the best decision-level technique (average-based soft voting) across all dimensions.

Conclusions:

Our findings confirm the advantages of the multimodal fNIRS-EEG system over single-modality systems, emphasizing both the individual characteristics of each modality and the complementary nature of their hybrid system, and suggest avenues for refining personalized emotion recognition approaches. This highlights the need for further investigation into optimized fusion techniques within alternative emotion recognition pipelines, paving the way for more nuanced and effective methodologies.
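The two best-performing strategies named in the Results, direct feature concatenation and average-based soft voting, reduce to very small operations. The sketch below assumes fixed-length per-modality feature vectors and a generic scikit-learn classifier; it is not the paper's benchmark pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def feature_level_fusion(eeg_feats, fnirs_feats):
    """Feature-level fusion: direct concatenation of per-modality feature vectors."""
    return np.concatenate([eeg_feats, fnirs_feats], axis=1)

def soft_vote(prob_eeg, prob_fnirs):
    """Decision-level fusion: average-based soft voting over class probabilities."""
    return (prob_eeg + prob_fnirs) / 2.0

# Hypothetical usage with pre-extracted features and two single-modality classifiers:
# clf = RandomForestClassifier().fit(feature_level_fusion(Xe_tr, Xf_tr), y_tr)
# fused = soft_vote(clf_eeg.predict_proba(Xe_te), clf_fnirs.predict_proba(Xf_te))
# y_pred = fused.argmax(axis=1)
```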
{"title":"Decoding emotions through personalized multi-modal fNIRS-EEG Systems: Exploring deterministic fusion techniques","authors":"Alireza F. Nia,&nbsp;Vanessa Tang,&nbsp;Gonzalo D. Maso Talou,&nbsp;Mark Billinghurst","doi":"10.1016/j.bspc.2025.107632","DOIUrl":"10.1016/j.bspc.2025.107632","url":null,"abstract":"<div><h3>Objective:</h3><div>Emotion recognition through cortical neurovascular measurements poses significant challenges due to the complex interplay of emotions within cortical activity. Although Electroencephalography (EEG) and functional Near-Infrared Spectroscopy (fNIRS) represent a promising combination for developing robust and accurate affective systems, the strategies for their integration remain relatively unexplored. This study aims to address this gap in personalized multimodal fNIRS-EEG emotion recognition systems.</div></div><div><h3>Methods:</h3><div>Confronted with the scarcity of large-scale multimodal fNIRS and EEG databases for emotion analysis, we developed our own dataset tailored for this research. We assessed widely used conventional models in single-modality emotion recognition to establish a benchmark. Various feature-level and decision-level fusion techniques were then implemented and evaluated against this benchmark.</div></div><div><h3>Results:</h3><div>The multimodal fNIRS-EEG system consistently outperformed single-modality systems, achieving notable accuracy improvements of 7.5%, 3%, and 6.5% across the valence, arousal, and dominance dimensions, respectively. These improvements underscore the robustness of multimodal systems in emotion detection. Additionally, the top-performing feature-level fusion technique (direct concatenation) was on par with the best decision-level technique (average-based soft voting) across all dimensions.</div></div><div><h3>Conclusions:</h3><div>Our findings confirm the advantages of the multimodal fNIRS-EEG system over single-modality systems emphasizing on both individual characteristics of each modality and complementary nature of their hybrid system and suggest avenues for refining personalized emotion recognition approaches. This highlights the need for further investigation into optimized fusion techniques within alternative emotion recognition pipelines, paving the way for more nuanced and effective methodologies.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107632"},"PeriodicalIF":4.9,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143387499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A nested self-supervised learning framework for 3-D semantic segmentation-driven multi-modal medical image fusion
IF 4.9 2区 医学 Q1 ENGINEERING, BIOMEDICAL Pub Date : 2025-02-12 DOI: 10.1016/j.bspc.2025.107653
Zhang Ying , Rencan Nie , Jinde Cao , Chaozhen Ma , Mingchuan Tan
The successful fusion of 3-D multi-modal medical images depends on both specific characteristics unique to each imaging mode as well as consistent spatial semantic features among all modes. However, the inherent variability in the appearance of these images poses a significant challenge to reliable learning of semantic information. To address this issue, this paper proposes a nested self-supervised learning framework for 3-D semantic segmentation-driven multi-modal medical image fusion. The proposed approach utilizes contrastive learning to effectively extract specified multi-scale features from each mode using U-Net (CU-Net). Subsequently, it employs geometric spatial consistency learning through a fusion convolutional decoder (FCD) and a geometric matching network (GMN) to ensure consistent acquisition of semantic representation within the same 3-D regions across multiple modalities. Additionally, a hybrid multi-level loss is introduced to facilitate the learning process of fused images. Ultimately, we leverage optimally specified multi-modal features for fusion and brain tumor lesion segmentation. The proposed approach enables cooperative learning between 3-D fusion and segmentation tasks by employing an innovative nested self-supervised strategy, thereby successfully striking a harmonious balance between semantic consistency and visual specificity during the extraction of multi-modal features. The fusion results demonstrated a mean classification SSIM, PSNR, NMI, and SFR of 0.9310, 27.8861, 1.5403, and 1.0896 respectively. The segmentation results revealed a mean classification Dice, sensitivity (Sen), specificity (Spe), and accuracy (Acc) of 0.8643, 0.8736, 0.9915, and 0.9911 correspondingly. The experimental findings demonstrate that our approach outperforms 11 other state-of-the-art fusion methods and 5 classical U-Net-based segmentation methods in terms of 4 objective metrics and qualitative evaluation. The code of the proposed method is available at https://github.com/ImZhangyYing/NLSF.
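For reference, the Dice, sensitivity, specificity, and accuracy figures quoted above follow directly from the confusion-matrix counts of a binary mask; the generic computation is shown below (it is not code from the paper).

```python
import numpy as np

def segmentation_metrics(pred, target, eps=1e-8):
    """Dice, sensitivity, specificity, and accuracy for binary 3-D masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    tp = np.sum(pred & target)
    tn = np.sum(~pred & ~target)
    fp = np.sum(pred & ~target)
    fn = np.sum(~pred & target)
    return {"dice": 2 * tp / (2 * tp + fp + fn + eps),
            "sensitivity": tp / (tp + fn + eps),
            "specificity": tn / (tn + fp + eps),
            "accuracy": (tp + tn) / (tp + tn + fp + fn + eps)}
```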
{"title":"A nested self-supervised learning framework for 3-D semantic segmentation-driven multi-modal medical image fusion","authors":"Zhang Ying ,&nbsp;Rencan Nie ,&nbsp;Jinde Cao ,&nbsp;Chaozhen Ma ,&nbsp;Mingchuan Tan","doi":"10.1016/j.bspc.2025.107653","DOIUrl":"10.1016/j.bspc.2025.107653","url":null,"abstract":"<div><div>The successful fusion of 3-D multi-modal medical images depends on both specific characteristics unique to each imaging mode as well as consistent spatial semantic features among all modes. However, the inherent variability in the appearance of these images poses a significant challenge to reliable learning of semantic information. To address this issue, this paper proposes a nested self-supervised learning framework for 3-D semantic segmentation-driven multi-modal medical image fusion. The proposed approach utilizes contrastive learning to effectively extract specified multi-scale features from each mode using U-Net (CU-Net). Subsequently, it employs geometric spatial consistency learning through a fusion convolutional decoder (FCD) and a geometric matching network (GMN) to ensure consistent acquisition of semantic representation within the same 3-D regions across multiple modalities. Additionally, a hybrid multi-level loss is introduced to facilitate the learning process of fused images. Ultimately, we leverage optimally specified multi-modal features for fusion and brain tumor lesion segmentation. The proposed approach enables cooperative learning between 3-D fusion and segmentation tasks by employing an innovative nested self-supervised strategy, thereby successfully striking a harmonious balance between semantic consistency and visual specificity during the extraction of multi-modal features. The fusion results demonstrated a mean classification SSIM, PSNR, NMI, and SFR of 0.9310, 27.8861, 1.5403, and 1.0896 respectively. The segmentation results revealed a mean classification Dice, sensitivity (Sen), specificity (Spe), and accuracy (Acc) of 0.8643, 0.8736, 0.9915, and 0.9911 correspondingly. The experimental findings demonstrate that our approach outperforms 11 other state-of-the-art fusion methods and 5 classical U-Net-based segmentation methods in terms of 4 objective metrics and qualitative evaluation. The code of the proposed method is available at <span><span>https://github.com/ImZhangyYing/NLSF</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107653"},"PeriodicalIF":4.9,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143387500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Investigating convolutional and transformer-based models for classifying Mild Cognitive Impairment using 2D spectral images of resting-state EEG
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-12 | DOI: 10.1016/j.bspc.2025.107667
Mesut Şeker, Mehmet Siraç Özerdem
Alzheimer’s disease (AD) is the most common form of dementia among the elderly, leading to significant cognitive and functional impairments. Mild Cognitive Impairment (MCI) serves as a transitional stage that may precede dementia, with some individuals remaining stable, some improving, and others progressing to various types of dementia, including AD. Electroencephalography (EEG) has emerged as a valuable tool for early monitoring and diagnosis of dementia. This paper addresses the challenge of MCI classification using EEG data by exploring the effectiveness of Convolutional Neural Networks (CNNs) and Transformer-based models. We introduce an innovative methodology for converting non-linear raw EEG recordings into suitable input images for deep learning networks. The dataset comprises EEG recordings from 10 MCI patients and 10 Healthy Control (HC) subjects. We utilize spectral images of scalograms, spectrograms, and their hybrid forms as input sets due to their effectiveness in recognizing transitions in non-stationary signals. Our results demonstrate that CNNs, transfer learning architectures, hybrid architectures, and the transformer-based Vision Transformer (ViT) method effectively classify these images. The highest performance rates were achieved with spectrogram images, yielding accuracy rates of 0.9927 for CNN and 0.9938 for ViT, with ViT exhibiting greater stability during training. While CNNs excel at capturing local pixel interactions, they overlook global relationships within images. This study provides a comprehensive exploration of EEG-based MCI classification, highlighting the potential impact of our findings on clinical practices for dementia classification.
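A minimal sketch of converting one resting-state EEG channel into a 2D spectrogram image suitable as CNN/ViT input; the sampling rate, window length, and colormap are assumptions, and the paper's scalogram and hybrid variants are not reproduced here.

```python
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

def eeg_to_spectrogram_image(signal, fs=256, out_path="epoch_spec.png"):
    """Save a log-power spectrogram of a single EEG channel as a 2D image."""
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=fs, noverlap=fs // 2)
    Sxx_db = 10 * np.log10(Sxx + 1e-12)        # log power improves contrast
    plt.imsave(out_path, Sxx_db, origin="lower", cmap="viridis")
    return Sxx_db
```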
{"title":"Investigating convolutional and transformer-based models for classifying Mild Cognitive Impairment using 2D spectral images of resting-state EEG","authors":"Mesut Şeker,&nbsp;Mehmet Siraç Özerdem","doi":"10.1016/j.bspc.2025.107667","DOIUrl":"10.1016/j.bspc.2025.107667","url":null,"abstract":"<div><div>Alzheimer’s disease (AD) is the most common form of dementia among the elderly, leading to significant cognitive and functional impairments. Mild Cognitive Impairment (MCI) serves as a transitional stage that may precede dementia, with some individuals remaining stable, some improving, and others progressing to various types of dementia, including AD. Electroencephalography (EEG) has emerged as a valuable tool for early monitoring and diagnosis of dementia. This paper addresses the challenge of MCI classification using EEG data by exploring the effectiveness of Convolutional Neural Networks (CNNs) and Transformer-based models. We introduce an innovative methodology for converting non-linear raw EEG recordings into suitable input images for deep learning networks. The dataset comprises EEG recordings from 10 MCI patients and 10 Healthy Control (HC) subjects. We utilize spectral images of scalograms, spectrograms, and their hybrid forms as input sets due to their effectiveness in recognizing transitions in non-stationary signals. Our results demonstrate that CNNs, transfer learning architectures, hybrid architectures, and the transformer-based Vision Transformer (ViT) method effectively classify these images. The highest performance rates were achieved with spectrogram images, yielding accuracy rates of 0.9927 for CNN and 0.9938 for ViT, with ViT exhibiting greater stability during training. While CNNs excel at capturing local pixel interactions, they overlook global relationships within images. This study provides a comprehensive exploration of EEG-based MCI classification, highlighting the potential impact of our findings on clinical practices for dementia classification.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107667"},"PeriodicalIF":4.9,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143387519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fast and detail-preserving low-dose CT denoising with diffusion model
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-12 | DOI: 10.1016/j.bspc.2025.107580
Bo Su , Pengwei Dong , Xiangyun Hu , Benyang Wang , Yunfei Zha , Zijun Wu , Jun Wan
Low-dose CT imaging is effective at reducing patient radiation exposure and prolonging equipment lifespan. However, it often leads to increased image noise and artifacts. Conventional convolutional neural networks may result in loss of detail and excessive smoothing. While diffusion models can mitigate this issue, their relatively low denoising efficiency limits their practical applicability. Recent studies focus on optimizing the sampling strategy. The denoising performance of diffusion models is heavily dependent on the UNet network, whereas our approach complements these studies by designing a more efficient noise prediction network to enhance denoising performance. First, we propose a novel and efficient extended perceptual downsampling (EEP), which broadens the perceptual field to capture richer information while maintaining original channel feature integrity. Notably, its computational effort and number of parameters are negligible. Second, we propose a symmetric-aware self-attention mechanism (SA self-attention) that leverages the symmetry and cross-scale similarity inherent in medical images to compute longitudinal and transverse interleaved attention maps at reduced scales, thereby lowering computational complexity. Third, a meticulous scanning protocol is implemented for the head and chest body models to acquire normal-dose, 10%, and 25% low-dose datasets. This dataset is made publicly available as part of this study. Comprehensive experimental results demonstrate that our method outperforms contemporary approaches in terms of performance and generalizability. Notably, our method achieves the highest performance in blind evaluations conducted by radiologists, with an inference time deemed clinically acceptable.
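As orientation on the diffusion part only, a generic DDPM-style noise-prediction training step is sketched below; `model` stands in for the noise-prediction UNet, and the paper's EEP downsampling and symmetric-aware self-attention are not reproduced.

```python
import torch
import torch.nn.functional as F

def ddpm_training_step(model, x0, alphas_cumprod):
    """One noise-prediction step; alphas_cumprod is a 1-D tensor of cumulative alphas."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    # Forward (noising) process: corrupt the clean CT slice x0 to x_t.
    x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * noise
    # The network predicts the injected noise; MSE is the usual objective.
    return F.mse_loss(model(x_t, t), noise)
```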
{"title":"Fast and detail-preserving low-dose CT denoising with diffusion model","authors":"Bo Su ,&nbsp;Pengwei Dong ,&nbsp;Xiangyun Hu ,&nbsp;Benyang Wang ,&nbsp;Yunfei Zha ,&nbsp;Zijun Wu ,&nbsp;Jun Wan","doi":"10.1016/j.bspc.2025.107580","DOIUrl":"10.1016/j.bspc.2025.107580","url":null,"abstract":"<div><div>Low-dose CT imaging is effective at reducing patient radiation exposure and prolonging equipment lifespan. However, it often leads to increased image noise and artifacts. Conventional convolutional neural networks may result in loss of detail and excessive smoothing. While diffusion models can mitigate this issue, their relatively low denoising efficiency limits their practical applicability. Recent studies focus on optimizing the sampling strategy. The denoising performance of diffusion models is heavily dependent on the UNet network, whereas our approach complements these studies by designing a more efficient noise prediction network to enhance denoising performance. First, we propose a novel and efficient extended perceptual downsampling (EEP), which broadens the perceptual field to capture richer information while maintaining original channel feature integrity. Notably, its computational effort and number of parameters are negligible. Second, we propose a symmetric-aware self-attention mechanism (SA self-attention) that leverages the symmetry and cross-scale similarity inherent in medical images to compute longitudinal and transverse interleaved attention maps at reduced scales, thereby lowering computational complexity. Third, a meticulous scanning protocol is implemented for the head and chest body models to acquire normal-dose, 10%, and 25% low-dose datasets. This dataset is made publicly available as part of this study. Comprehensive experimental results demonstrate that our method outperforms contemporary approaches in terms of performance and generalizability. Notably, our method achieves the highest performance in blind evaluations conducted by radiologists, with an inference time deemed clinically acceptable.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107580"},"PeriodicalIF":4.9,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143387501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Apnea and hypopnea event detection using EEG, EMG, and sleep stage labels in a cohort of patients with suspected sleep apnea
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-12 | DOI: 10.1016/j.bspc.2025.107628
Danny H. Zhang , Jeffrey Zhou , Joseph D. Wickens , Andrew G. Veale , Luke E. Hallum
Automating the screening, diagnosis, and monitoring of sleep apnea (SA) is potentially clinically useful. We present machine-learning models which detect SA and hypopnea events from the overnight electroencephalogram (EEG) and electromyogram (EMG), and we explain detection mechanisms. We tested four models using a novel data set comprising six-channel EEG and two-channel EMG recorded from 26 consecutive patients; recordings were expertly labeled with sleep stage and apnea/hypopnea events. For Model 1, EEG subband power and sample entropy were features used to train and test a random forest classifier. Model 2 was identical to Model 1, but we used EMG, not EEG. Model 3 was a simple decision strategy contingent upon sleep stage label. Model 4 was identical to Model 1, but we used EEG subband power, sample entropy, and sleep stage label. All models performed above chance (Matthews correlation coefficient, MCC > 0): Model 4 (leave-one-patient-out cross-validated MCC = 0.314) outperformed Model 3 (0.230) which outperformed Models 2 and 1 (0.147 and 0.154, respectively). Results indicate that sleep stage label alone is sufficient to detect apnea/hypopnea events. Either EMG or EEG subband power and sample entropy can be used to detect apnea/hypopnea events, but these EEG features likely reflect contamination by EMG. Indeed, EMG power was modulated by apnea/hypopnea event beginning and end, and similar modulation appeared in EEG power. Machine-learning approaches to the detection of apnea/hypopnea events using overnight EEG must be explainable; they must account for EMG contamination and sleep stage.
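A sketch of the feature pipeline described for Model 1: EEG subband power plus sample entropy feeding a random forest scored with MCC. The band edges, sampling rate, epoch handling, and forest size are assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def subband_powers(epoch, fs=128):
    """Band power per conventional EEG band for a single-channel epoch."""
    f, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 4 * fs))
    df = f[1] - f[0]
    return [psd[(f >= lo) & (f < hi)].sum() * df for lo, hi in BANDS.values()]

def sample_entropy(x, m=2, r_frac=0.2):
    """Naive O(n^2) sample entropy; adequate for short epochs."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return np.sum(d <= r) - len(templ)   # exclude self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.nan

# Hypothetical usage on labelled epochs:
# X = [subband_powers(ep) + [sample_entropy(ep)] for ep in epochs]
# clf = RandomForestClassifier(n_estimators=300).fit(X_train, y_train)
# print(matthews_corrcoef(y_test, clf.predict(X_test)))
```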
{"title":"Apnea and hypopnea event detection using EEG, EMG, and sleep stage labels in a cohort of patients with suspected sleep apnea","authors":"Danny H. Zhang ,&nbsp;Jeffrey Zhou ,&nbsp;Joseph D. Wickens ,&nbsp;Andrew G. Veale ,&nbsp;Luke E. Hallum","doi":"10.1016/j.bspc.2025.107628","DOIUrl":"10.1016/j.bspc.2025.107628","url":null,"abstract":"<div><div>Automating the screening, diagnosis, and monitoring of sleep apnea (SA) is potentially clinically useful. We present machine-learning models which detect SA and hypopnea events from the overnight electroencephalogram (EEG) and electromyogram (EMG), and we explain detection mechanisms. We tested four models using a novel data set comprising six-channel EEG and two-channel EMG recorded from 26 consecutive patients; recordings were expertly labeled with sleep stage and apnea/hypopnea events. For Model 1, EEG subband power and sample entropy were features used to train and test a random forest classifier. Model 2 was identical to Model 1, but we used EMG, not EEG. Model 3 was a simple decision strategy contingent upon sleep stage label. Model 4 was identical to Model 1, but we used EEG subband power, sample entropy, and sleep stage label. All models performed above chance (Matthews correlation coefficient, MCC <span><math><mo>&gt;</mo></math></span> 0): Model 4 (leave-one-patient-out cross-validated MCC = 0.314) outperformed Model 3 (0.230) which outperformed Models 2 and 1 (0.147 and 0.154, respectively). Results indicate that sleep stage label alone is sufficient to detect apnea/hypopnea events. Either EMG or EEG subband power and sample entropy can be used to detect apnea/hypopnea events, but these EEG features likely reflect contamination by EMG. Indeed, EMG power was modulated by apnea/hypopnea event beginning and end, and similar modulation appeared in EEG power. Machine-learning approaches to the detection of apnea/hypopnea events using overnight EEG must be explainable; they must account for EMG contamination and sleep stage.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107628"},"PeriodicalIF":4.9,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A multi-modality framework for precise brain tumor detection and multi-class classification using hybrid GAN approach
IF 4.9 | CAS Zone 2 (Medicine) | Q1 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-11 | DOI: 10.1016/j.bspc.2025.107559
S. Karpakam , N. Kumareshan
Brain tumors are a life-threatening disease requiring early diagnosis and treatment to improve patient outcomes. Magnetic resonance imaging (MRI) is widely used for diagnosing brain tumors, but existing MRI-based deep learning models often lack accuracy and efficiency. This work proposes a novel approach utilizing a Deep Image Recognition Generative Adversarial Network (DIR-GAN) for brain tumor detection and classification in MRI images. The methodology involves several key steps: Adaptive Bilateral Filtering (ABF) is employed to reduce noise while preserving edges, ensuring high-quality input images; Otsu-Gannet Segmentation (OGS) combines Otsu's thresholding with the Gannet Optimization Algorithm for precise segmentation of tumor regions; and features are extracted with the Gray-Level Co-occurrence Matrix (GLCM) and the Enhanced Grasshopper Optimization Algorithm (EGOA), capturing essential characteristics of the segmented images. These extracted features are then fed into the DIR-GAN, which uses attention mechanisms and multi-scale feature extraction to generate synthetic MRI images and enhance classification accuracy. The DIR-GAN architecture includes a generator and a discriminator, trained simultaneously to improve feature recognition and classification capabilities. Implemented in Python, the proposed models achieve accuracies of 98.86% and 98.40% on the Figshare and MRI datasets, respectively, and 97.83% on the X-ray dataset. This innovative method offers a dependable and interpretable solution for the early diagnosis and classification of brain tumors, with the potential to enhance clinical outcomes for patients.
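As a rough stand-in for the preprocessing steps named above, plain OpenCV bilateral filtering and Otsu thresholding are sketched below; the adaptive (ABF) and Gannet-optimised (OGS) variants, and the DIR-GAN itself, are not reproduced here, and the filter parameters are illustrative.

```python
import cv2
import numpy as np

def preprocess_mri_slice(img_gray):
    """Edge-preserving denoising followed by Otsu thresholding on an 8-bit slice."""
    img = np.asarray(img_gray, dtype=np.uint8)
    smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    _, mask = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return smoothed, mask
```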
{"title":"A multi-modality framework for precise brain tumor detection and multi-class classification using hybrid GAN approach","authors":"S. Karpakam ,&nbsp;N. Kumareshan","doi":"10.1016/j.bspc.2025.107559","DOIUrl":"10.1016/j.bspc.2025.107559","url":null,"abstract":"<div><div>Brain tumor is a life-threatening disease requiring early diagnosis and treatment to improve patient outcomes. Magnetic resonance imaging (MRI) is widely used for diagnosing brain tumors, but existing MRI-based deep learning models often lack accuracy and efficiency. This work proposes a novel approach utilizing a Deep Image Recognition Generative Adversarial Network (DIR-GAN) for brain tumor detection and classification in MRI images. The methodology involves several key steps: Adaptive Bilateral Filtering (ABF) is employed to reduce noise while preserving edges, ensuring high-quality input images. Otsu-Gannet Segmentation (OGS) combines Otsu’s thresholding with the Gannet Optimization Algorithm for precise segmentation of tumor regions. Gray-Level Co-occurrence Matrix (GLCM) and the Enhanced Grasshopper Optimization Algorithm (EGOA), capturing essential characteristics of the segmented images. These extracted features are then fed into the DIR-GAN, which uses attention mechanisms and multi-scale feature extraction to generate synthetic MRI images and enhance classification accuracy. The DIR-GAN architecture includes a generator and a discriminator, trained simultaneously to improve feature recognition and classification capabilities. Developed in Python, the proposed models achieve accuracies of 98.86% and 98.40% on the Fig Share and MRI datasets, respectively, and 97.83% on the X-ray dataset. This innovative method offers a dependable and interpretable solution for the early diagnosis and classification of brain tumors, with the potential to enhance clinical outcomes for patients.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"104 ","pages":"Article 107559"},"PeriodicalIF":4.9,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143378459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0