
Biomedical Signal Processing and Control: Latest Publications

Speed and accuracy in Tandem: Deep Learning-Powered Millisecond-Level pulmonary embolism detection in CTA
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-03-05 · DOI: 10.1016/j.bspc.2025.107792
Houde Wu, Ting Chen, Longshuang Wang, Li Guo

Background and Objectives

Pulmonary embolism (PE) is a critical medical condition that requires rapid and accurate diagnosis. Traditional methods, although highly accurate, neglect the urgency of speed required in emergency settings. This study aims to develop a deep learning model that not only maintains high accuracy but also achieves millisecond-level PE detection speed.

Materials and Methods

This study employed an internal dataset comprising 160 patients from Tianjin Medical University General Hospital and an external RSNA dataset for validation. Our model, built upon the YOLOv5 framework, was enhanced with Partial Convolution, a C2f module, and a decoupled head structure.
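
As a rough illustration of the Partial Convolution idea mentioned above, the sketch below follows the general FasterNet-style PConv design, in which only a fraction of the channels are convolved and the rest are passed through untouched; the channel ratio and the way it plugs into YOLOv5 are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """PConv-style block: convolve only a subset of channels and pass the rest
    through unchanged, which reduces FLOPs and memory access (illustrative only)."""
    def __init__(self, channels: int, conv_ratio: float = 0.25):
        super().__init__()
        self.conv_channels = int(channels * conv_ratio)
        self.conv = nn.Conv2d(self.conv_channels, self.conv_channels,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split along the channel dimension and convolve only the first part.
        x1, x2 = torch.split(
            x, [self.conv_channels, x.shape[1] - self.conv_channels], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)
```

For example, `PartialConv(256)(torch.randn(1, 256, 40, 40))` preserves both the spatial size and the channel count while convolving only a quarter of the channels.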

Results

The internal test set achieved a recall of 82.5 %, precision of 84.2 %, and mean average precision (mAP) of 87.2 %, significantly outperforming the other leading models. Notably, our model provided an inference time of just 1.6 ms per image, setting a new benchmark for real-time PE detection, which was faster than YOLOv5 (2.9 ms), YOLOv6 (4.0 ms), and YOLOv8 (3.2 ms). Furthermore, our model demonstrated faster convergence and consistently lower loss values during training, achieving perfect precision at a significantly lower confidence threshold than other YOLO variants, highlighting its superior optimization and generalization capabilities.

Conclusion

This study successfully developed a deep learning model capable of millisecond-level PE detection without compromising accuracy. Its performance underscores its potential to revolutionize PE diagnosis in emergency clinical settings, enabling timely and reliable intervention.
{"title":"Speed and accuracy in Tandem: Deep Learning-Powered Millisecond-Level pulmonary embolism detection in CTA","authors":"Houde Wu ,&nbsp;Ting Chen ,&nbsp;Longshuang Wang ,&nbsp;Li Guo","doi":"10.1016/j.bspc.2025.107792","DOIUrl":"10.1016/j.bspc.2025.107792","url":null,"abstract":"<div><h3>Background and Objectives</h3><div>Pulmonary embolism (PE) is a critical medical condition that requires a rapid and accurate diagnosis. Traditional methods, although highly precise, have focused primarily on accuracy, neglecting the urgency of speed required in emergency settings. This study aims to develop a deep learning model that not only maintains high accuracy but also achieves millisecond-level PE detection speed.</div></div><div><h3>Materials and Methods</h3><div>This study employed an internal dataset comprising 160 patients from Tianjin Medical University General Hospital, and an external RSNA dataset for validation. Our model, built upon the YOLOv5 framework, was enhanced with Partial Convolution, a C2f module, and decoupled head structure.</div></div><div><h3>Results</h3><div>The internal test set achieved a recall of 82.5 %, precision of 84.2 %, and mean average precision (mAP) of 87.2 %, significantly outperforming the other leading models. Notably, our model provided an inference time of just 1.6 ms per image, setting a new benchmark for real-time PE detection, which was faster than YOLOv5 (2.9 ms), YOLOv6 (4.0 ms), and YOLOv8 (3.2 ms). Furthermore, our model demonstrated faster convergence and consistently lower loss values during training, achieving perfect precision at a significantly lower confidence threshold than other YOLO variants, highlighting its superior optimization and generalization capabilities.</div></div><div><h3>Conclusion</h3><div>This study successfully developed a deep learning model capable of millisecond-level PE detection without compromising the accuracy. Its performance underscores its potential to revolutionize PE diagnosis in emergency clinical settings, enabling timely and reliable intervention.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"106 ","pages":"Article 107792"},"PeriodicalIF":4.9,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143551361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dual-branch channel attention enhancement feature fusion network for diabetic retinopathy segmentation
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-03-05 · DOI: 10.1016/j.bspc.2025.107721
Lei Ma, Ziqian Liu, Qihang Xu, Hanyu Hong, Lei Wang, Ying Zhu, Yu Shi
Diabetic retinopathy (DR) is an eye disease caused by diabetes that leads to impaired vision and even blindness. DR segmentation technology can assist ophthalmologists with early diagnosis, which can help to prevent the progression of this disease. However, DR segmentation is a challenging task because of the large variation in scale, high inter-class similarity, complex structures, blurred edges and different brightness contrasts of different kinds of lesions. Most existing methods tend not to adequately extract the semantic information in the channels of lesion features, which is a critical element for effectively distinguishing lesion edges. In this paper, we propose a dual-branch channel attention enhancement feature fusion network that integrates CNN and Transformer for DR segmentation. First, we introduce a Channel Crossing Attention Module (CCAM) into the U-Net framework to eliminate semantic inconsistencies between the encoder and decoder for better integration of contextual information. Moreover, we leverage Transformer’s robust global information acquisition capabilities to acquire long-range information, and further enhance the contextual information. Finally, we build a Dual-branch Channel Attention Enhancement Fusion Module (DCAE) to enhance the semantic information of the channels in both branches, which improves the discriminability of the blurred edges of lesions. Compared with the state-of-the-art methods, our method improved mAUPR, mDice, and mIOU by 1.36%, 1.85%, and 2.20% on the IDRiD dataset, and by 4.62%, 0.20%, and 2.60% on the DDR dataset, respectively. The experimental results show that the multi-scale semantic features of the two branches are effectively fused, which achieves accurate lesion segmentation.
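
To make the channel-attention idea concrete, here is a hypothetical sketch of how a channel-crossing attention gate between encoder skip features and decoder features might look; the module name, shapes, and reduction ratio are assumptions for illustration, not the paper's actual CCAM.

```python
import torch
import torch.nn as nn

class ChannelCrossAttention(nn.Module):
    """Hypothetical channel-crossing gate: decoder features produce channel
    weights that re-calibrate the encoder skip features before fusion."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, skip: torch.Tensor, decoder: torch.Tensor) -> torch.Tensor:
        # A global average description of the decoder drives the channel weights.
        w = self.fc(decoder.mean(dim=(2, 3)))            # (B, C)
        return skip * w.unsqueeze(-1).unsqueeze(-1)      # re-weighted skip features
```
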
{"title":"Dual-branch channel attention enhancement feature fusion network for diabetic retinopathy segmentation","authors":"Lei Ma,&nbsp;Ziqian Liu,&nbsp;Qihang Xu,&nbsp;Hanyu Hong,&nbsp;Lei Wang,&nbsp;Ying Zhu,&nbsp;Yu Shi","doi":"10.1016/j.bspc.2025.107721","DOIUrl":"10.1016/j.bspc.2025.107721","url":null,"abstract":"<div><div>Diabetic retinopathy (DR) is an eye disease caused by diabetes that leads to impaired vision and even blindness. DR segmentation technology can assist ophthalmologists with early diagnosis, which can help to prevent the progression of this disease. However, DR segmentation is a challenging task because of the large variation in scale, high inter-class similarity, complex structures, blurred edges and different brightness contrasts of different kinds of lesions. Most existing methods tend not to adequately extract the semantic information in the channels of lesion features, which is a critical element for effectively distinguishing lesion edges. In this paper, we propose a dual-branch channel attention enhancement feature fusion network that integrates CNN and Transformer for DR segmentation. First, we introduce a Channel Crossing Attention Module (CCAM) into the U-Net framework to eliminate semantic inconsistencies between the encoder and decoder for better integration of contextual information. Moreover, we leverage Transformer’s robust global information acquisition capabilities to acquire long-range information, and further enhance the contextual information. Finally, we build a Dual-branch Channel Attention Enhancement Fusion Module (DCAE) to enhance the semantic information of the channels in both branches, which improves the discriminability of the blurred edges of lesions. Compared with the state-of-the-art methods, our method improved mAUPR, mDice, and mIOU by 1.36%, 1.85%, and 2.20% on the IDRiD dataset, and by 4.62%, 0.20%, and 2.60% on the DDR dataset, respectively. The experimental results show that the multi-scale semantic features of the two branches are effectively fused, which achieves accurate lesion segmentation.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"106 ","pages":"Article 107721"},"PeriodicalIF":4.9,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143551360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Clinical knowledge integrated multi-task learning network for breast tumor segmentation and pathological complete response prediction
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-03-04 · DOI: 10.1016/j.bspc.2025.107772
Wei Song, Xiang Pan, Ming Fan, Lihua Li
The accurate segmentation of breast tumors helps determine the boundaries and size of the tumor, providing crucial information for subsequent treatment planning. It also enables a more precise characterization of the tumor, which can be used to predict the patient’s response to neoadjuvant chemotherapy. Existing methodologies predominantly rely on single-task learning, overlooking the potential inter-task correlations inherent in multi-task learning. Moreover, the available clinical knowledge derived from medical reports is often overlooked in prior research, which is important for enhancing the understanding of disease progression and treatment outcomes. To address these problems, we propose a knowledge integrated multi-task learning (KIMTL) network that performs tumor segmentation and pathological complete response (pCR) prediction concurrently. Clinical knowledge is merged with extracted high-level image features to enhance prediction performance. The attention mechanism effectively leverages the inter-channel and inter-spatial relationships within features, thereby enhancing network effectiveness. The proposed multi-task learning network optimizes the balance between segmentation and prediction tasks using uncertainty weight loss. The experimental results from a dataset of 216 cases indicate that KIMTL could improve the performance of both tasks, particularly the prediction task (AUC = 0.816). Specifically, in the prediction task, the AUC increases from 0.789 to 0.816. In the segmentation task, the Jaccard index is improved from 0.710 to 0.740. Our study suggests that incorporating clinical domain knowledge into deep learning modeling can augment the performance of breast tumor segmentation and pCR prediction. KIMTL achieves promising performance and outperforms its single-task learning counterparts.
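
The "uncertainty weight loss" referenced above is, in its commonly used form, the homoscedastic-uncertainty task weighting of Kendall et al. (2018); the sketch below shows that general form under the assumption of two tasks (segmentation and pCR prediction), and the paper's exact variant may differ.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Learnable task weighting: each task loss is scaled by exp(-log_var)
    and penalized by log_var, so the balance is optimized with the network."""
    def __init__(self, num_tasks: int = 2):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = torch.zeros((), device=self.log_vars.device)
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total
```

In training, the module would be instantiated once, its `log_vars` added to the optimizer, and the combined loss computed as `criterion([seg_loss, cls_loss])` in place of a hand-tuned weighted sum.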
{"title":"Clinical knowledge integrated multi-task learning network for breast tumor segmentation and pathological complete response prediction","authors":"Wei Song ,&nbsp;Xiang Pan ,&nbsp;Ming Fan ,&nbsp;Lihua Li","doi":"10.1016/j.bspc.2025.107772","DOIUrl":"10.1016/j.bspc.2025.107772","url":null,"abstract":"<div><div>The accurate segmentation of breast tumors helps determine the boundaries and size of the tumor, providing crucial information for subsequent treatment planning. It also enables a more precise characterization of the tumor, which can be used to predict the patient’s response to neoadjuvant chemotherapy. Existing methodologies predominantly rely on single-task learning, overlooking the potential inter-task correlations inherent in multi-task learning. Moreover, the available clinical knowledge derived from medical reports is often overlooked in prior research, which is important for enhancing the understanding of disease progression and treatment outcomes. To address these problems, we propose a knowledge integrated multi-task learning (KIMTL) network that performs tumor segmentation and pathological complete response (pCR) prediction concurrently. Clinical knowledge is merged with extracted high-level image features to enhance prediction performance. The attention mechanism effectively leverages the inter-channel and inter-spatial relationships within features, thereby enhancing network effectiveness. The proposed multi-task learning network optimizes the balance between segmentation and prediction tasks using uncertainty weight loss. The experimental results from a dataset of 216 cases indicate that KIMTL could improve the performance of both tasks, particularly the prediction task (AUC = 0.816). Specifically, in the prediction task, the AUC increases from 0.789 to 0.816. In the segmentation task, the Jaccard index is improved from 0.710 to 0.740. Our study suggests that incorporating clinical domain knowledge into deep learning modeling can augment the performance of breast tumor segmentation and pCR prediction. KIMTL achieves promising performance and outperforms its single-task learning counterparts.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"106 ","pages":"Article 107772"},"PeriodicalIF":4.9,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143551375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SVVD-NET: A framework with relative position constraints for vertebral vertex detection
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-03-03 · DOI: 10.1016/j.bspc.2025.107746
Yongkang Xu, Lianhong Duan, Zhicheng Zhang, Tiansheng Sun, Yang Zhang, Lixia Tian
Vertebral vertex detection is a fundamental step for subsequent spine image analysis and X-ray image-based spine disease diagnosis. Existing CNN-based frameworks for landmark detection can be directly used for automatic vertebral vertex detection. However, challenges such as overlapping vertebrae and vertebrae misalignment often arise when applying existing methods to vertebral vertex detection. To address these issues, we propose a sequential vertebral vertex detection network (SVVD-Net) that fully utilizes the regularity of vertebral alignment to generate vertebral vertices with relative positional constraints. By leveraging information about previously predicted vertebrae to identify the next one, SVVD-Net makes sequential predictions and effectively avoids vertebra overlapping and misalignment. We design an anatomy-aware encoder based on an external attention mechanism to capture the anatomical information regarding the similarities in shape and alignment of vertebrae among samples. A structured mask is used in the decoder to reduce the direct influence of one vertebra upon its immediate neighbor and accordingly accommodate occasional subtle misalignment between two adjacent vertebrae. We evaluate the performance of SVVD-Net on two datasets of X-ray images of the spine. The results indicate that the proposed SVVD-Net consistently outperforms state-of-the-art methods. Ablation experiments further support the effectiveness of the sequential landmark generation, anatomy-aware encoder, and structured mask. Accordingly, this study presents a successful attempt to incorporate anatomical priors into medical image analysis.
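
The external attention mechanism named above has a standard published form (Guo et al., 2021) built around two small learnable memory units shared across samples; the sketch below follows that general form with illustrative sizes and is not the authors' exact encoder.

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """External attention: a linear memory M_k produces the attention map,
    which, after double normalization, is decoded by a second memory M_v."""
    def __init__(self, dim: int, memory_size: int = 64):
        super().__init__()
        self.mk = nn.Linear(dim, memory_size, bias=False)
        self.mv = nn.Linear(memory_size, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); tokens could be vertebra-level features.
        attn = torch.softmax(self.mk(x), dim=1)               # normalize over tokens
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # l1-normalize over memory slots
        return self.mv(attn)
```

Because the memory units are shared across all samples, this style of attention can encode dataset-level regularities such as typical vertebra shape and alignment.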
{"title":"SVVD-NET: A framework with relative position constraints for vertebral vertex detection","authors":"Yongkang Xu ,&nbsp;Lianhong Duan ,&nbsp;Zhicheng Zhang ,&nbsp;Tiansheng Sun ,&nbsp;Yang Zhang ,&nbsp;Lixia Tian","doi":"10.1016/j.bspc.2025.107746","DOIUrl":"10.1016/j.bspc.2025.107746","url":null,"abstract":"<div><div>Vertebral vertex detection is a fundamental step for subsequent spine image analysis and X-ray image-based spine disease diagnosis. Existing CNN-based frameworks for landmark detection can be directly used for automatic vertebral vertex detection. However, challenges such as overlapping vertebrae and vertebrae misalignment often arise when applying existing methods to vertebral vertex detection. To address the issues, we propose a sequential vertebral vertex detection network (SVVD-Net) that fully utilizes the regularity of vertebral alignment to generate vertebral vertices with relative positional constraints. Leveraging information about previously predicted vertebrae to identify the next one, the SVVD-Net could make sequential predictions and effectively avoid vertebrae overlapping and misalignment. We design an anatomy-aware encoder based on external attention mechanism, to address the anatomical information regarding the similarities in shape and alignment of vertebrae among samples. Structured mask is used in the decoder to reduce the direct influence of one vertebra upon its immediate neighbor and accordingly accommodate occasional subtle misalignment between two adjacent vertebrae. We evaluate the performance of SVVD-Net on two datasets of X-ray images of the spine. The results indicate that the proposed SVVD-Net consistently outperforms state-of-the-art methods. Ablation experiments further support the effectiveness of involved sequential landmark generation, anatomy-aware encoder and structured mask. Accordingly, this study presents a successful attempt to incorporate anatomical priors into medical image analysis.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"106 ","pages":"Article 107746"},"PeriodicalIF":4.9,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143534374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Diffusion probabilistic multi-cue level set for reducing edge uncertainty in pancreas segmentation
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-03-03 · DOI: 10.1016/j.bspc.2025.107744
Yue Gou, Yuming Xing, Shengzhu Shi, Zhichang Guo
Accurately segmenting the pancreas remains a major challenge. Traditional methods encounter difficulties in semantic localization due to the small volume and distorted structure of the pancreas, while deep learning methods struggle to obtain accurate edges because of low contrast and organ overlap. To overcome these issues, we propose a multi-cue level set method based on the diffusion probabilistic model, namely Diff-mcs. Our method adopts a coarse-to-fine segmentation strategy. We use the diffusion probabilistic model in the coarse segmentation stage, with the obtained probability distribution serving as both the initial localization and the prior cues for the level set method. In the fine segmentation stage, we combine the prior cues with grayscale cues and texture cues to refine the edge by maximizing the difference between the probability distributions of the cues inside and outside the level set curve. The method is validated on three public datasets and achieves state-of-the-art performance, yielding more accurate segmentation results with lower-uncertainty edges. In addition, we conduct ablation studies and uncertainty analysis to verify that the diffusion probabilistic model provides a more appropriate initialization for the level set method. Furthermore, when combined with multiple cues, the level set method captures edges better and improves overall accuracy. Our code is available at https://github.com/GOUYUEE/Diff-mcs.
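
As a schematic of the multi-cue refinement described above, a region-based level set energy of this family can be written as follows; the specific difference measure D and the length-regularization term are assumptions added for completeness, not taken from the paper.

```latex
E(\phi) \;=\; -\sum_{k \in \{\text{prior},\,\text{gray},\,\text{texture}\}} \lambda_k \,
  D\!\left(p_k^{\mathrm{in}} \,\Vert\, p_k^{\mathrm{out}}\right)
  \;+\; \mu \int_{\Omega} \bigl|\nabla H(\phi)\bigr| \, dx
```

Here \phi is the level set function, H the Heaviside function, p_k^{in} and p_k^{out} the probability distributions of cue k inside and outside the zero level set, D a distribution-difference measure, and \mu a curve-length regularization weight; minimizing E drives the curve toward edges that maximally separate the cue distributions.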
{"title":"Diffusion probabilistic multi-cue level set for reducing edge uncertainty in pancreas segmentation","authors":"Yue Gou,&nbsp;Yuming Xing,&nbsp;Shengzhu Shi,&nbsp;Zhichang Guo","doi":"10.1016/j.bspc.2025.107744","DOIUrl":"10.1016/j.bspc.2025.107744","url":null,"abstract":"<div><div>Accurately segmenting the pancreas remains a huge challenge. Traditional methods encounter difficulties in semantic localization due to the small volume and distorted structure of the pancreas, while deep learning methods encounter challenges in obtaining accurate edges because of low contrast and organ overlapping. To overcome these issues, we propose a multi-cue level set method based on the diffusion probabilistic model, namely Diff-mcs. Our method adopts a coarse-to-fine segmentation strategy. We use the diffusion probabilistic model in the coarse segmentation stage, with the obtained probability distribution serving as both the initial localization and prior cues for the level set method. In the fine segmentation stage, we combine the prior cues with grayscale cues and texture cues to refine the edge by maximizing the difference between probability distributions of the cues inside and outside the level set curve. The method is validated on three public datasets and achieves state-of-the-art performance, which can obtain more accurate segmentation results with lower uncertainty segmentation edges. In addition, we conduct ablation studies and uncertainty analysis to verify that the diffusion probability model provides a more appropriate initialization for the level set method. Furthermore, when combined with multiple cues, the level set method can better obtain edges and improve the overall accuracy. Our code is available at <span><span>https://github.com/GOUYUEE/Diff-mcs</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"106 ","pages":"Article 107744"},"PeriodicalIF":4.9,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143528723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An intellectual autism spectrum disorder classification framework in healthcare industry using ViT-based adaptive deep learning model
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-03-03 · DOI: 10.1016/j.bspc.2025.107737
Rama Parvathy, Rajesh Arunachalam, Sukumaran Damodaran, Muna Al-Razgan, Yasser A. Ali, Yogapriya J
Autism Spectrum Disorder (ASD) is a brain disorder that primarily affects communication ability, object identification, cognitive capacity, interpersonal skills, and speech comprehension. Its primary origin is genetic, and early diagnosis and intervention can reduce the need for expensive medical procedures and lengthy tests for ASD patients. Neuroimaging methods can identify composite biomarkers of ASD from functional connectivity abnormalities, yet in practice ASD is still identified from symptoms during clinical examination. Traditional automated techniques trained on large aggregated datasets often yield unreliable diagnostic classification, so an efficient deep learning-based ASD classifier is needed to overcome the limitations of classical models. The innovation of this work lies in a novel deep learning approach, ViT-ARDNet-LSTM, for classifying ASD from MRI images. The framework integrates the merits of a Vision Transformer (ViT), adaptive residual DenseNets, and LSTM models for efficient ASD classification. It addresses the need for effective and accurate ASD diagnosis, the shortcomings of conventional models, and the difficulty of analyzing complex MRI images; in particular, it handles variability in MRI images, the requirement for robust feature extraction, and the importance of optimizing model parameters. By offering an effective and novel solution for ASD classification, the work has the potential to improve diagnostic accuracy, reduce diagnosis time, and improve patient care. First, MRI images are collected from benchmark resources and passed to the preprocessing stage, where Contrast Limited Adaptive Histogram Equalization (CLAHE) and bilateral filtering are applied. Next, the pre-processed images are fed to the classification stage, where the Vision Transformer-based Adaptive Residual Densenet with Long Short-Term Memory layer (ViT-ARDNet-LSTM) classifies ASD. The parameters of ViT-ARDNet-LSTM are optimized with the Modified Zebra Optimization Algorithm (MZOA) to enhance classification performance. Finally, experimental validation shows that the model attains 94% accuracy, precision, specificity, and sensitivity with the sigmoid activation function, along with a 5% false positive rate (FPR). These results demonstrate that the designed ASD classification framework outperforms conventional models and supports timely diagnosis.
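
The CLAHE and bilateral-filtering preprocessing steps mentioned above map directly onto standard OpenCV calls; the snippet below is a minimal sketch with assumed parameter values (clip limit, tile size, filter diameter), not the authors' settings.

```python
import cv2

def preprocess_slice(path: str):
    """Contrast enhancement (CLAHE) followed by edge-preserving denoising
    (bilateral filter) on a grayscale MRI slice."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)
    img = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    return img
```
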
{"title":"An intellectual autism spectrum disorder classification framework in healthcare industry using ViT-based adaptive deep learning model","authors":"Rama Parvathy ,&nbsp;Rajesh Arunachalam ,&nbsp;Sukumaran Damodaran ,&nbsp;Muna Al-Razgan ,&nbsp;Yasser A. Ali ,&nbsp;Yogapriya J","doi":"10.1016/j.bspc.2025.107737","DOIUrl":"10.1016/j.bspc.2025.107737","url":null,"abstract":"<div><div>Autism Spectrum Disorder (ASD) is a brain disease that mostly affects communication ability, object identification, cognitive capacity, interpersonal skills, and speech comprehension. Its primary origin is genetics, and intervention and diagnosis in an early stage can alleviate the requirement for expensive medical approaches and lengthy tests for ASD patients. Neuroimaging methods can be used to distinguish the composite biomarkers within the ASD based on functional connectivity abnormalities. However, the identification of ASD adopts symptom-based conditions through medical examination. Traditional automated techniques based on extensive aggregated datasets are likely to attain undependable diagnostic classification. Hence, it is essential to establish an efficient ASD classifier system with a deep learning approach to overcome the limitations of the classical models. The innovation of the developed work lies in the implementation of a novel deep learning approach named ViT-ARDNet-LSTM for classifying ASD utilizing MRI images. This framework integrates the merits of ViT, adaptive residual densenets, and LSTM models for efficiently classifying ASD. The developed model rectifies some issues such as the requirement for effective and accurate ASD diagnosis, the problems of conventional models, and the complexities of validating the complex MRI images. Especially, the suggested work resolves the problems of variability in the MRI images, the requirement of robust feature extraction, and the importance of optimizing the model parameters. By offering an effective and novel solution for ASD classification, the suggested work has the potential to improve the diagnosis accuracy, minimize the diagnosis time, and improve patient care. In the developed work, at first, significant MRI images are accumulated from benchmark resources and it is offered as input to the preprocessing stage. In this phase, Contrast Limited Adaptive Histogram Equalization (CLAHE) and bilateral filtering mechanisms are introduced to pre-process the gathered MRI images. Next, the pre-processed images are offered to the ASD classification stage. In this phase, an implemented mechanism named Vision Transformer-based Adaptive Residual Densenet with Long Short Term Memory layer (ViT-ARDNet-LSTM) is utilized to classify the ASD. Moreover, the parameters in ViT-ARDNet-LSTM are optimized using the Modified Zebra Optimization Algorithm (MZOA) for enhancing the functionality of classification. Lastly, the experimental validations are carried out for the developed work. The experimental results displayed that the suggested model attained 94% accuracy, precision, specificity, and sensitivity values when considering the sigmoid activation function. Also, the developed model achieved 5% FPR values. 
These results elucidate that the designed ASD classification framework outperforms the conventional models and improves timely diagnosis.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"106 ","pages":"Article 107737"},"PeriodicalIF":4.9,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143534375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DAFFNet: A dual attention feature fusion network for classification of white blood cells
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-03-01 · DOI: 10.1016/j.bspc.2025.107699
Yuzhuo Chen, Zetong Chen, Yunuo An, Chenyang Lu, Xu Qiao
The precise categorization of white blood cells (WBCs) is vital for diagnosing blood-related disorders. However, manual analysis in clinical settings is often time-consuming, labor-intensive, and prone to errors. Consequently, the use of Computer-Aided Diagnostic (CAD) techniques has become essential to assist hematologists in accurately classifying WBCs, improving diagnostic efficiency and reliability. Numerous studies have employed machine learning and deep learning techniques to achieve objective WBC classification, yet these studies have not fully utilized the information in WBC images. Our motivation is therefore to comprehensively use the morphological and high-level semantic information of WBC images to achieve accurate WBC classification. In this study, we propose a novel dual-branch network, the Dual Attention Feature Fusion Network (DAFFNet), which first integrates the high-level semantic features with the morphological features of WBCs to achieve more accurate classification. Specifically, we introduce a dual attention mechanism, enabling the model to comprehensively leverage both the channel-wise and spatially localized features of the image, enhancing its ability to capture crucial information for precise WBC classification. A Morphological Feature Extractor (MFE), comprising a Morphological Attributes Predictor (MAP) and a Morphological Attributes Encoder (MAE), is proposed to extract the morphological features of WBCs. We also implement Deep-supervised Learning (DSL) and Semi-supervised Learning (SSL) training strategies for the MAE to enhance its performance. Our proposed network framework achieves 98.77%, 91.30%, 98.36%, 99.71%, 98.45%, and 98.85% overall accuracy on the six public datasets PBC, LISC, Raabin-WBC, BCCD, LDWBC, and Labelled, respectively, demonstrating superior effectiveness compared to existing studies. On the BCCD and Labelled datasets, the overall accuracy of our model exceeds the state-of-the-art by 0.52% and 4.36%, respectively. The results indicate that WBC classification combining high-level semantic features and low-level morphological features is of great significance, laying the foundation for objective and accurate classification of WBCs in microscopic blood cell images.
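
For readers unfamiliar with the dual (channel + spatial) attention pattern, the sketch below shows a CBAM-style version of the idea; it illustrates the general mechanism with an assumed reduction ratio and kernel size, rather than DAFFNet's actual module.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Channel attention from pooled descriptors, then spatial attention from
    channel-wise statistics (CBAM-style, illustrative only)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention: average- and max-pooled descriptors through a shared MLP.
        w = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * w.view(b, c, 1, 1)
        # Spatial attention: channel-wise mean and max maps through a 7x7 conv.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```
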
{"title":"DAFFNet: A dual attention feature fusion network for classification of white blood cells","authors":"Yuzhuo Chen ,&nbsp;Zetong Chen ,&nbsp;Yunuo An,&nbsp;Chenyang Lu,&nbsp;Xu Qiao","doi":"10.1016/j.bspc.2025.107699","DOIUrl":"10.1016/j.bspc.2025.107699","url":null,"abstract":"<div><div>The precise categorization of white blood cells (WBCs) is vital for diagnosing blood-related disorders. However, manual analysis in clinical settings is often time-consuming, labor-intensive, and prone to errors. Consequently, the use of Computer-Aided Diagnostic (CAD) techniques has become essential to assist hematologists in accurately classifying WBCs, improving diagnostic efficiency and reliability. Numerous studies have employed machine learning and deep learning techniques to achieve objective WBC classification, yet these studies have not fully utilized information on WBC images. Therefore, our motivation is to comprehensively use the morphological and high-level semantic information of WBC images to achieve accurate classification of WBC. In this study, we propose a novel dual-branch network, the Dual Attention Feature Fusion Network (DAFFNet), which first integrates the high-level semantic features with the morphological features of WBC to achieve more accurate classification. Specifically, we introduce a dual attention mechanism, enabling the model to comprehensively leverage both the channel-wise and spatially localized features of the image, enhancing its ability to capture crucial information for precise WBC classification. A Morphological Feature Extractor (MFE), comprising Morphological Attributes Predictor (MAP) and Morphological Attributes Encoder (MAE), is proposed to extract the morphological features of WBC. We also implement Deep-supervised Learning (DSL) and Semi-supervised Learning (SSL) training strategies for MAE to enhance its performance. Our proposed network framework achieves 98.77%, 91.30%, 98.36%, 99.71%, 98.45%, and 98.85% overall accuracy on the six public datasets PBC, LISC, Raabin-WBC, BCCD, LDWBC, and Labelled, respectively, demonstrating superior effectiveness compared to existing studies. On the BCCD dataset and Labelled dataset, the overall accuracy of our model exceeds the state-of-the-art model by 0.52% and 4.36%, respectively. The results indicate that the WBC classification combining high-level semantic features and low-level morphological features is of great significance, which lays the foundation for objective and accurate classification of WBC in microscopic blood cell images.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"106 ","pages":"Article 107699"},"PeriodicalIF":4.9,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143519876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
STL Net: A spatio-temporal multi-task learning network for Autism spectrum disorder identification
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-03-01 · DOI: 10.1016/j.bspc.2025.107678
Yongjie Huang, Yanyan Zhang, Man Chen, Xiao Han, Zhisong Pan

Background and Objective:

The rich temporal and spatial information contained in Functional magnetic resonance imaging (fMRI) data is crucial for accurately identifying Autism spectrum disorder (ASD). Most current ASD identification methods capture temporal and spatial information in a serial manner, resulting in partial loss of information and sub-optimal outcomes. To solve this problem, we propose a heterogeneous spatio-temporal multi-task learning network (STL Net) for distinguishing between ASD patients and normal controls (NCs).

Methods:

Initially, we define two networks to extract temporal and spatial features, respectively. Subsequently, an attention mechanism further captures useful ASD-related features in each network. To facilitate the interaction of spatio-temporal information, a spatio-temporal feature sharing module shares temporal and spatial features in parallel. Finally, the spatio-temporal features are aggregated for ASD identification.
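
As a minimal sketch of what parallel spatio-temporal feature sharing could look like, the hypothetical module below performs a residual exchange between the two branches; the paper's actual sharing module is not specified here.

```python
import torch
import torch.nn as nn

class FeatureSharing(nn.Module):
    """Hypothetical parallel exchange: each branch adds a projected summary of
    the other branch before the final aggregation."""
    def __init__(self, dim: int):
        super().__init__()
        self.t_to_s = nn.Linear(dim, dim)
        self.s_to_t = nn.Linear(dim, dim)

    def forward(self, temporal: torch.Tensor, spatial: torch.Tensor):
        # Both directions are computed from the original inputs, i.e. in parallel.
        temporal_out = temporal + self.s_to_t(spatial)
        spatial_out = spatial + self.t_to_s(temporal)
        return temporal_out, spatial_out
```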

Results:

We conduct experiments on five datasets from the Autism Brain Imaging Data Exchange, with the following results: Accuracy of 73.52%, 72.00%, 83.33%, 78.57% and 90.90%; Sensitivity of 66.66%, 70.00%, 80.00%, 88.88%, and 100.00%; and Specificity of 78.94%, 73.33%, 87.50%, 60.00% and 80.00%. The results show that our method outperforms other state-of-the-art ASD identification methods in Accuracy and exhibits significant competitiveness in Sensitivity and Specificity. Additionally, this method accurately identifies and points out the associated brain regions in ASD patients.

Conclusions:

This paper proposes a novel heterogeneous multi-task learning method, which offers a new perspective for more effective utilization of fMRI data in ASD identification. The proposed method can be translated into clinical applications to assist doctors in automated health screening for ASD.
{"title":"STL Net: A spatio-temporal multi-task learning network for Autism spectrum disorder identification","authors":"Yongjie Huang ,&nbsp;Yanyan Zhang ,&nbsp;Man Chen,&nbsp;Xiao Han,&nbsp;Zhisong Pan","doi":"10.1016/j.bspc.2025.107678","DOIUrl":"10.1016/j.bspc.2025.107678","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>The rich temporal and spatial information contained in Functional magnetic resonance imaging (fMRI) data is crucial for accurately identifying Autism spectrum disorder (ASD). Most current ASD identification methods capture temporal and spatial information in a serial manner, resulting in partial loss of information and sub-optimal outcomes. To solve this problem, we propose a heterogeneous spatio-temporal multi-task learning network (STL Net) for distinguishing between ASD patients and normal controls (NCs).</div></div><div><h3>Methods:</h3><div>Initially, we define two networks to extract temporal and spatial features respectively. Subsequently, the attention mechanism further capture useful features related to ASD in each network. To facilitate the interaction of spatio-temporal information, a spatio-temporal feature sharing module shares temporal and spatial features in parallel. Finally, the spatio-temporal features are aggregated for ASD identification.</div></div><div><h3>Results:</h3><div>We conduct experiments on five datasets from the Autism Brain Imaging Data Exchange, with the following results: Accuracy of 73.52%, 72.00%, 83.33%, 78.57% and 90.90%; Sensitivity of 66.66%, 70.00%, 80.00%, 88.88%, and 100.00%; and Specificity of 78.94%, 73.33%, 87.50%, 60.00% and 80.00%. The results show that our method outperforms other state-of-the-art ASD identification methods in Accuracy and exhibits significant competitiveness in Sensitivity and Specificity. Additionally, this method accurately identifies and points out the associated brain regions in ASD patients.</div></div><div><h3>Conclusions:</h3><div>This paper proposes a novel heterogeneous multi-task learning method, which offers a new perspective for more effective utilization of fMRI data in ASD identification. The proposed method can be translated into clinical applications to assist doctors in automated health screening for ASD.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"106 ","pages":"Article 107678"},"PeriodicalIF":4.9,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143519846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Infrared absorption spectroscopy-based non-invasive blood glucose monitoring technology: A comprehensive review
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-03-01 · DOI: 10.1016/j.bspc.2025.107750
Taixiang Li, Quangui Wang, Ying An, Lin Guo, Linan Ren, Linghao Lei, Xianlai Chen
Diabetes, characterized by hyperglycemia, is an incurable metabolic disorder with an alarmingly high prevalence rate. Self-monitoring of blood glucose holds exceptional significance in diabetes management. However, traditional invasive blood glucose monitoring devices have imposed inconvenience and discomfort on patients. This has propelled research in non-invasive blood glucose monitoring into the forefront, offering substantial clinical utility. In this survey, we reviewed the major technologies of non-invasive blood glucose monitoring based on absorption spectroscopy, including physical methodologies, signal and data processing techniques, and the progress in commercialization. This review can serve as an introduction to the modeling principles of non-invasive blood glucose monitoring, or as a collection of technical application methods of non-invasive glucose monitoring.
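
For context on the modeling principle behind absorption-based glucose sensing, the standard Beer-Lambert relation (a textbook result, not specific to this review) links measured absorbance to analyte concentration:

```latex
A(\lambda) \;=\; \log_{10}\frac{I_0(\lambda)}{I(\lambda)} \;=\; \varepsilon(\lambda)\, l \, c
```

Here A is the absorbance at wavelength \lambda, I_0 and I are the incident and transmitted intensities, \varepsilon is the molar absorptivity, l the optical path length, and c the glucose concentration; in practice, multivariate calibration is typically needed because tissue spectra contain many overlapping absorbers.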
{"title":"Infrared absorption spectroscopy-based non-invasive blood glucose monitoring technology: A comprehensive review","authors":"Taixiang Li ,&nbsp;Quangui Wang ,&nbsp;Ying An ,&nbsp;Lin Guo ,&nbsp;Linan Ren ,&nbsp;Linghao Lei ,&nbsp;Xianlai Chen","doi":"10.1016/j.bspc.2025.107750","DOIUrl":"10.1016/j.bspc.2025.107750","url":null,"abstract":"<div><div>Diabetes, characterized by hyperglycemia, is an incurable metabolic disorder with an alarmingly high prevalence rate. Self-monitoring of blood glucose holds exceptional significance in diabetes management. However, traditional invasive blood glucose monitoring devices have imposed inconvenience and discomfort on patients. This has propelled research in non-invasive blood glucose monitoring into the forefront, offering substantial clinical utility. In this survey, we reviewed the major technologies of non-invasive blood glucose monitoring based on absorption spectroscopy, including physical methodologies, signal and data processing techniques, and the progress in commercialization. This review can serve as an introduction to the modeling principles of non-invasive blood glucose monitoring, or as a collection of technical application methods of non-invasive glucose monitoring.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"106 ","pages":"Article 107750"},"PeriodicalIF":4.9,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143519843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimized Alzheimer disorder classification with DACN-MFFN utilizing OBLDE-TDO enhanced deep neural network features
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2025-03-01 · DOI: 10.1016/j.bspc.2025.107729
M. Karthiga, E. Suganya, S. Sountharrajan, J. Jeyalakshmi, Sindhu Ravindran, Shahrol Mohamaddan
Alzheimer’s disease (AD) is a condition that causes progressive deterioration of the brain and has important consequences for society and healthcare. Therefore, it is crucial to diagnose the disease early and accurately in order to manage it effectively. This work introduces a new method for predicting AD by utilising advanced Deep Learning (DL) models and optimised feature-extraction strategies. The Dual Attention-based Convolutional Network combined with Multilayer Feature Fusion Network (DACN-MFFN), optimised using the Opposition Based Learning Differential Evaluation combined with Tasmanian Devil Optimization (OBLDE-TDO) technique, exhibits outstanding performance in accurately categorising AD patients. The tests are performed on a publicly accessible MRI dataset from Kaggle, using a 70:30 split for training and testing. The performance of the model is assessed using conventional measures such as accuracy, precision, recall, and F1 score, in addition to its computational complexity. The results demonstrate that the proposed model obtains an exceptional accuracy of 99.6% in predicting AD, outperforming the most advanced models currently available. Furthermore, the model exhibited exceptional precision, recall, and F1 score metrics, underscoring its effectiveness in differentiating between AD and non-AD cases. The model was notably successful in minimising misclassifications, as evidenced by its low False Negative Rate of 1%. In addition, our ablation investigation showed that the proposed model is very responsive to hyperparameter fine-tuning, achieving optimal performance with certain learning rates and a variety of dropout rates and weight decay ratios. Through meticulous optimisation, combinations were found that strike a balance between model performance and computational efficiency, demonstrating the model's efficacy for early and accurate AD diagnosis.
{"title":"Optimized Alzheimer disorder classification with DACN-MFFN utilizing OBLDE-TDO enhanced deep neural network features","authors":"M. Karthiga ,&nbsp;E. Suganya ,&nbsp;S. Sountharrajan ,&nbsp;J. Jeyalakshmi ,&nbsp;Sindhu Ravindran ,&nbsp;Shahrol Mohamaddan","doi":"10.1016/j.bspc.2025.107729","DOIUrl":"10.1016/j.bspc.2025.107729","url":null,"abstract":"<div><div>Alzheimer’s disease (AD) is a condition that causes the progressive deterioration of the brain and has important consequences for society and healthcare. Therefore, it is crucial to diagnose the disease early and accurately in order to effectively manage it. This work introduces a new method for predicting AD by utilising advanced Deep Learning (DL) models and optimised strategies for extracting features. The Dual Attention-based Convolutional Network combined with Multilayer Feature Fusion Network (DACN-MFFN), optimised using the Opposition Based Learning Differential Evaluation combined with Tasmanian Devil Optimization (OBLDE-TDO) technique, exhibits outstanding performance in accurately categorising AD patients. The tests are performed using a publically accessible MRI dataset from Kaggle, implementing a 70:30 split for training and testing. The performance of the model is assessed using conventional measures such as accuracy, precision, recall, and F1 score, in addition to considering its computational complexity. The results demonstrate that the proposed model obtains an exceptional accuracy of 99.6% in predicting AD, outperforming the most advanced models currently available. Furthermore, the model exhibited exceptional precision, recall, and F1 score metrics, underscoring its effectiveness in differentiating between instances of AD and non-AD cases. The model demonstrated a notable success in minimising misunderstandings, as evidenced by its low False Negative Rate of 1%. In addition, our ablation investigation shown that the proposed model is very responsive to fine-tuning of hyperparameters, achieving optimal performance with certain learning rates and a variety of drop out rates and weight decay ratios. By doing meticulous optimisation, combinations that achieve a harmonious equilibrium between the performance of the model and its computational efficiency were discovered, thus proving its efficacy for diagnosing AD early and accurately.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"106 ","pages":"Article 107729"},"PeriodicalIF":4.9,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143519840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0