Pub Date : 2025-02-11 | DOI: 10.1016/j.bspc.2025.107658
Shengjian Hu, Weining Fang, Haifeng Bao, Tianlong Zhang
Detecting mental fatigue is crucial for preventing accidents in safety-critical settings. This paper presents a non-contact multimodal mental fatigue detection method that improves both convenience and accuracy. Our approach combines facial expression sequences with physiological signals from frequency-modulated continuous wave (FMCW) radar. We also developed a self-supervised learning-based multimodal framework that enhances detection accuracy. Additionally, we created a specialized dataset using the psychological AX-CPT paradigm and conducted comparative studies with similar public datasets. Experimental results show that our multimodal self-supervised learning strategy significantly improves non-contact mental fatigue detection accuracy, achieving a best accuracy of 0.918 on our self-built dataset and performing well on comparable public datasets.
Title: Non-contact detection of mental fatigue from facial expressions and heart signals: A self-supervised-based multimodal fusion method
Biomedical Signal Processing and Control, Volume 105, Article 107658
Waiting times for patients in outpatient departments are closely associated with patient satisfaction and the calibre of treatment received. In addition, shortening patients' waiting times can effectively alleviate their anxiety. This paper therefore employs machine learning to model past examination processes and predict patient waiting times from parameters such as prior waiting time, queue length, and waiting-room density. First, we propose a high-efficiency optimizer called RDBKA, which incorporates a Double Adaptive Weight Strategy and a Random Spare Strategy into the Black-winged Kite Algorithm (BKA). Then, using RDBKA to optimize the parameters of a kernel extreme learning machine (KELM), we propose the patient waiting-time prediction model RDBKA-KELM. In the experimental section, benchmark-function and ablation experiments verify the optimization performance of RDBKA through comparison tests against 12 peer algorithms. The results show that RDBKA-KELM outperforms seven other peer models in both prediction error and accuracy. RDBKA-KELM is therefore a promising, reliable, and effective method for predicting patient waiting times, helping hospitals use healthcare resources efficiently.
Title: An enhanced black-winged kite algorithm boosted machine learning prediction model for patients' waiting time
Pub Date : 2025-02-10 | DOI: 10.1016/j.bspc.2024.107425
Biomedical Signal Processing and Control, Volume 105, Article 107425
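The abstract does not spell out the KELM formulation that RDBKA tunes. As a hedged sketch, a plain kernel extreme learning machine regressor has a closed-form solve (the RDBKA metaheuristic is omitted here, and the `C` and `gamma` values are illustrative choices, not the paper's):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Minimal kernel extreme learning machine regressor.

    Output weights come from a regularized least-squares solve in
    kernel space: beta = (I/C + K)^{-1} y.
    """
    def __init__(self, C=10.0, gamma=0.5):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        n = len(X)
        self.beta = np.linalg.solve(np.eye(n) / self.C + K, y)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

# Toy regression standing in for waiting-time data (synthetic, 2 features)
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (50, 2))
y = X @ np.array([2.0, 1.0])          # smooth noise-free target
model = KELM(C=1e3, gamma=2.0).fit(X, y)
pred = model.predict(X)
```

In the paper, RDBKA would search over `C` and `gamma` instead of fixing them by hand.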
Pub Date : 2025-02-10 | DOI: 10.1016/j.bspc.2025.107565
C. Venkataiah, M. Chennakesavulu, Y. Mallikarjuna Rao, B. Janardhana Rao, G. Ramesh, J. Sofia Priya Dharshini, Manjula Jayamma
An effective deep learning model is recommended for detecting glaucoma. The detection process contains three phases: image collection, segmentation, and detection. First, the required images are collected from benchmark sources. The collected images then undergo optic cup and disc segmentation, performed by a Trans-MobileUnet with a Novel Loss function (TMUnet-NL). The segmented image with minimal loss is given as input to the Attention-based Dilated Hybrid Network (ADHNet) for detection. ADHNet is a powerful solution for eye disease classification, combining the strengths of dilated, attention-based VGG16 and DTCN models. Within ADHNet, features from the cup, disc, and raw images are extracted by a Visual Geometry Group (VGG16) network. The features from the cup, disc, and whole images are fused and passed to a Deep Temporal Convolution Network (DTCN) for glaucoma detection. Compared with classical techniques, the recommended method achieves an accuracy of 94%. When eye disease is caught at an early stage, accurate treatment can help guard against sight loss; the significance of eye disease management therefore lies primarily in early detection, which enhances treatment outcomes and offers more reliable eye health solutions. By adopting this deep learning model for segmentation and classification of eye diseases, clinical experts can make better-informed decisions. Regular eye examinations by clinical experts help preserve eyesight and enhance the quality of day-to-day life.
Title: A novel eye disease segmentation and classification model using advanced deep learning network
Biomedical Signal Processing and Control, Volume 105, Article 107565
Pub Date : 2025-02-08 | DOI: 10.1016/j.bspc.2025.107584
Pradeep Kumar Das, S. Sreevatsav, Adyasha Sahu, Shah Arpan Hasmukh Mayuri, Pareshkumar Ramanbhai Sagar
Tuberculosis (TB) is a chronic bacterial infection that damages the lungs and is one of the top 10 causes of mortality worldwide, so an accurate and prompt diagnosis is essential. In this work, a novel Orthogonal Softmax Layer-based Tuberculosis Detection Convolutional Neural Network (OSLTBDNet) is developed by leveraging the merits of depthwise separable convolution, tunable hyperparameters, an inverted residual bottleneck block, and orthogonal softmax layer (OSL)-based classification. OSL maintains orthogonality among weight vectors to boost class discrimination capability, thus improving classification results. It reduces co-adaptation between parameters by discarding several connections from the fully connected convolution layer (FCCL), hence simplifying the optimization. Leveraging these salient features makes the proposed system both more accurate and faster. Experimental results suggest that the proposed model outperforms competing models, achieving the best accuracies of 99.00% and 98.17% on the Kaggle and TBX11K datasets, respectively.
Title: OSLTBDNet: Orthogonal softmax layer-based tuberculosis detection network with small dataset
Biomedical Signal Processing and Control, Volume 105, Article 107584
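The OSL idea of keeping class weight vectors orthogonal to sharpen class discrimination can be illustrated with a small numpy sketch. The soft penalty form and the dimensions below are my own illustrative choices, not the paper's exact layer:

```python
import numpy as np

def orthogonality_penalty(W):
    """Soft orthogonality loss ||W^T W - I||_F^2 over class weight
    vectors (columns of W). Driving this toward zero decorrelates the
    class weights, which is the effect an OSL-style classifier exploits."""
    k = W.shape[1]
    G = W.T @ W                       # Gram matrix of the weight columns
    return float(np.sum((G - np.eye(k)) ** 2))

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 4))          # 64-dim features, 4 classes
Q, _ = np.linalg.qr(W)                # columns orthonormalized via QR
penalty_random = orthogonality_penalty(W)
penalty_ortho = orthogonality_penalty(Q)
```

An orthonormal weight matrix drives the penalty to (numerically) zero, while random weights incur a large penalty; in training this term would simply be added to the cross-entropy loss.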
Pub Date : 2025-02-08 | DOI: 10.1016/j.bspc.2025.107554
Liu Deng, Pengrui Li, Haokai Zhang, Qingyuan Zheng, Shihong Liu, Xinmin Ding, Manqing Wang, Dongrui Gao
Steady-state visual evoked potentials (SSVEPs) support the brain’s interaction with external devices thanks to their high signal-to-noise ratio and information transfer rate. Because electroencephalogram (EEG) electrodes follow an irregular spatial distribution, current SSVEP deep learning methods do not represent the spatial topology patterns of EEG signals well, so spatial feature extraction urgently needs further improvement. To this end, we propose a comprehensive network called TSMNet, combining a temporal feature extractor and a spatial topology converter for SSVEP classification. Specifically, the temporal feature extractor constructs a Temporal-conv block that filters temporal-domain noise to capture pure high-order temporally dependent representations. The spatial topology converter models the multi-channel representations in EEG signals and dynamically updates the shape of the topology, thus effectively capturing high-order spatial topology representations. We further develop a multigraph subspace module for multi-space mapping of the spatial topology converter’s output. We evaluate TSMNet on two open SSVEP datasets with time windows of 0.5 s and 1.0 s and compare it with state-of-the-art methods in inter-subject and intra-subject experiments. The results show that TSMNet has significant performance advantages, achieving accuracy improvements of 2.04% and 1.47% on the two datasets. Attractively, this end-to-end framework not only eliminates time-consuming manual feature engineering but also speeds up practical deployment.
Title: TSMNet: A comprehensive network based on spatio-temporal representations for SSVEP classification
Biomedical Signal Processing and Control, Volume 105, Article 107554
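A spatial topology converter that learns a dynamic channel topology can be sketched as one GAT-style attention aggregation over EEG channels. This is a generic illustration under my own assumptions (single head, LeakyReLU scoring), not TSMNet's actual module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention_step(H, W, a):
    """One attention-weighted aggregation over EEG channels.

    H: (C, F) per-channel features; W: (F, Fp) projection;
    a: (2*Fp,) attention vector. The row-normalized attention matrix A
    plays the role of a dynamically updated channel topology."""
    Z = H @ W                                    # project features, (C, Fp)
    Fp = Z.shape[1]
    # pairwise attention logits e_ij = a^T [z_i || z_j], via broadcasting
    logits = (Z @ a[:Fp])[:, None] + (Z @ a[Fp:])[None, :]
    logits = np.where(logits > 0, logits, 0.2 * logits)   # LeakyReLU
    A = softmax(logits, axis=1)                  # learned adjacency
    return A, A @ Z                              # topology + updated features

rng = np.random.default_rng(2)
H = rng.normal(size=(8, 16))   # 8 EEG channels, 16 temporal features each
W = rng.normal(size=(16, 8))
a = rng.normal(size=16)
A, Hp = graph_attention_step(H, W, a)
```

Each row of `A` sums to one, so every channel's new representation is a convex combination of its projected neighbors, with the mixing weights learned rather than fixed by electrode geometry.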
Pub Date : 2025-02-08 | DOI: 10.1016/j.bspc.2025.107523
Carmen Plaza-Seco, Kenneth E. Barner, Roberto Holgado-Cuadrado, Francisco M. Melgarejo-Meseguer, José-Luis Rojo-Álvarez, Manuel Blanco-Velasco
The development of wearable technologies for acquiring bioelectrical signals is expanding, increasing the demand for algorithms to analyze large clinical datasets. Accurate intra-beat waveform detection in ECG analysis is critical for assisting cardiologists in diagnosing cardiac diseases. We present a novel deep-learning detector capable of detecting specific intra-beat waves without the need for heartbeat identification, addressing a key limitation in current approaches. The model is trained to directly detect one of three key waveforms: P, QRS, or T. This approach is well-suited for applications such as identifying the T wave for repolarization alternans or ischemia. We employ a rigorous patient separation methodology and use a gold standard with expert manual labels from two public databases: QTDB and LUDB. The model utilizes a simple autoencoder (AE) architecture, offering interpretability insights by visualizing decision-making in a latent space. During the model design stage, the system achieves F1-scores of 0.93, 0.97, and 0.93 for P, QRS, and T wave identification. In real-world ambulatory environments, the in-line detector performs at 0.94, 0.98, and 0.96 for each wave. This work uses manifold learning for ECG intra-wave detection with simple, explainable models based on the applied machine learning principles. It outperforms state-of-the-art methods and provides valuable insights into the decision-making process, making it particularly well-suited for real-world applications, offering both accuracy and interpretability in ambulatory ECG analysis.
Title: Detection of intra-beat waves on ambulatory ECG using manifolds: An explainable deep learning approach
Biomedical Signal Processing and Control, Volume 105, Article 107523
Pub Date : 2025-02-08 | DOI: 10.1016/j.bspc.2025.107652
Guilan Tu, Wuchao Li, Yongshun Lin, Zi Xu, Junjie He, Bangkang Fu, Ping Huang, Rongpin Wang, Yunsong Peng
Background and objective
The WHO grading of meningioma is closely linked to patient treatment and prognosis, making the development of an accurate deep learning model based on whole slide images (WSI) of significant clinical importance. Currently, deep learning-based multiple instance learning (MIL) often focuses on single-scale image information, which can result in information loss, or it fuses all features across different scales, potentially introducing noise from non-key regions in multi-scale images. To overcome the above limitations, we propose a novel MIL model.
Methods
In this study, we proposed a Graph Attention-guided Multi-scale fusion Multiple-Instance Learning (GAMMIL) model. It consists of three channels: 10×, 40×, and fused. In the 40× channel, it adaptively selects key patches rather than fusing all patches, avoiding interference from non-key regions and wasted computing resources. Further, the graph attention-guided module extracts global information across multi-scale images for better information fusion. Finally, the fused features are used to predict the WHO grade of meningioma.
Results
A total of 428 meningioma WSIs were collected from Guizhou Provincial People’s Hospital. The experimental results demonstrate that the GAMMIL model achieved an AUC of 0.911, surpassing other state-of-the-art MIL models. Additionally, the GAMMIL model was validated on two publicly available datasets (TCGA-RCC and CAMELYON16) and a private multi-center clear cell renal cell carcinoma dataset, where it exhibited superior discriminative performance compared to other leading models.
Conclusions
The GAMMIL model effectively simulates the pathologist’s diagnostic process, showing strong classification performance across datasets. Its adaptive patch selection and multi-scale fusion offer valuable insights for cancer pathology.
Title: GAMMIL: A graph attention-guided multi-scale fusion multiple instance learning model for the WHO grading of meningioma in whole slide images
Biomedical Signal Processing and Control, Volume 105, Article 107652
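The key-patch selection step in the 40× channel amounts to ranking patches by an attention score and keeping only the top-k. A minimal sketch (the score source and `k` are illustrative, not GAMMIL's exact mechanism):

```python
import numpy as np

def select_key_patches(features, scores, k):
    """Attention-guided key-patch selection for MIL: keep only the k
    patches with the highest attention scores instead of fusing all of
    them, mirroring the idea of skipping non-key WSI regions."""
    idx = np.argsort(scores)[-k:][::-1]   # indices of the top-k scores
    return idx, features[idx]

rng = np.random.default_rng(3)
feats = rng.normal(size=(100, 32))   # 100 patch embeddings from the 40x channel
scores = rng.random(100)             # stand-in attention scores
idx, key = select_key_patches(feats, scores, k=10)
```

Only the 10 selected embeddings would then enter the multi-scale fusion, which is what saves computation on non-key regions.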
Pub Date : 2025-02-08 | DOI: 10.1016/j.bspc.2025.107611
Jialuo He, Hao Tang, Yanbo Dai, Xionghuan Chen, Peiyang Hu, Hailin Cao, Lianyang Zhang
The "valley of death" symbolizes a critical gap between basic and clinical research. Although significant AI-assisted efforts have been made in the medical field, most of these advancements remain at the laboratory research stage, and there is still a lack of research aimed at bridging laboratory experiments and clinical applications. This paper addresses this issue by proposing a novel regression analysis approach for measuring intra-abdominal pressure (IAP), aiming to minimize the gap between swine and human models. Because IAP measurement is invasive and raises privacy concerns, human data are limited, which hampers the training of high-performance deep learning models. Leveraging the complete life cycle of a swine, from a healthy state to heightened IAP leading to mortality, allows us to simulate all the stages of human IAP variation. Specifically, we employ contrastive learning to regulate the features and place them in sequential order, followed by a Kullback–Leibler divergence-based domain adaptation technique that transfers the knowledge gained from the swine model to the human model. In future work, we will consider extending this method to prediction tasks. Our proposed method significantly reduces the mean absolute error (MAE) from 1.5537 to 0.3614 compared with the baseline models, demonstrating the efficacy of our approach.
Title: A supervised contrastive learning-based fine-tuning approach for translational intra-abdominal pressure prediction
Biomedical Signal Processing and Control, Volume 105, Article 107611
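A KL-divergence-based alignment between source (swine) and target (human) feature distributions can be sketched with the closed form for diagonal Gaussians; the Gaussian assumption and the synthetic features here are mine, not necessarily the paper's exact loss:

```python
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    """KL(P || Q) between two diagonal Gaussians, summed over dimensions:
    0.5 * sum( log(var_q/var_p) + (var_p + (mu_p - mu_q)^2)/var_q - 1 ).
    A common surrogate for aligning feature distributions across domains."""
    return 0.5 * float(np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0))

rng = np.random.default_rng(4)
src = rng.normal(0.0, 1.0, size=(500, 8))   # stand-in swine-model features
tgt = rng.normal(0.5, 1.2, size=(500, 8))   # stand-in human-model features
kl = kl_gaussian(src.mean(0), src.var(0), tgt.mean(0), tgt.var(0))
```

Minimizing such a term during fine-tuning pulls the two feature distributions together; it vanishes only when the moments match.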
Pub Date : 2025-02-08DOI: 10.1016/j.bspc.2025.107600
Xiaona Song , Zenglong Peng , Shuai Song , Vladimir Stojanovic
This paper focuses on the asynchronous interval type-2 fuzzy state estimation for switched nonlinear reaction–diffusion susceptible–infected–recovered (SIR) epidemic models with impulsive effects. Initially, based on the stage characteristics of epidemic outbreaks, impulsive switched reaction–diffusion neural networks are proposed to model SIR epidemics more comprehensively. Then, the investigated models are linearized by using the interval type-2 Takagi–Sugeno fuzzy method, which can handle the nonlinearity and uncertainty of the system well. Next, considering the asynchronous switching between the system mode and that of the estimator, caused by system identification and other factors, an asynchronous fuzzy state estimator with switching and impulsive features is designed to accurately estimate the state of the target systems. Finally, sufficient conditions ensuring stability of the state estimation error are derived, and the effectiveness of the theoretical results is validated by numerical examples.
{"title":"Asynchronous state estimation for switched nonlinear reaction–diffusion SIR epidemic models with impulsive effects","authors":"Xiaona Song , Zenglong Peng , Shuai Song , Vladimir Stojanovic","doi":"10.1016/j.bspc.2025.107600","DOIUrl":"10.1016/j.bspc.2025.107600","url":null,"abstract":"<div><div>This paper focuses on the asynchronous interval type-2 fuzzy state estimation for switched nonlinear reaction–diffusion susceptible–infected–recovered (SIR) epidemic models with impulsive effects. Initially, based on the stage characteristics of epidemic outbreaks, impulsive switched reaction–diffusion neural networks are proposed to model SIR epidemics more comprehensively. Then, the investigated models are linearized by using the interval type-2 Takagi–Sugeno fuzzy method, which can handle the nonlinearity and uncertainty of the system well. Next, considering the phenomenon of asynchronous switching between the system state and the estimator one due to system identification and other factors, the asynchronous fuzzy state estimator with switching and impulsive features is designed to accurately estimate the state of the target systems. Finally, sufficient conditions for ensuring the state estimation error to be stable are derived, and the effectiveness of the theoretical results is validated by numerical examples.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107600"},"PeriodicalIF":4.9,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
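The reaction–diffusion SIR dynamics underlying this model can be illustrated with a minimal 1-D explicit finite-difference simulation. This is a generic textbook sketch, not the paper's impulsive switched type-2 fuzzy formulation; all parameter values and boundary conditions (periodic) are illustrative assumptions.

```python
import numpy as np

def simulate_sir_rd(beta=0.4, gamma=0.1, d=0.01, nx=50, nt=200, dt=0.1, dx=1.0):
    """Explicit Euler integration of a 1-D reaction-diffusion SIR model:
    S_t = d*S_xx - beta*S*I,  I_t = d*I_xx + beta*S*I - gamma*I,  R_t = d*R_xx + gamma*I,
    with periodic boundary conditions and a small infected seed at the center."""
    S = np.ones(nx)
    I = np.zeros(nx)
    I[nx // 2] = 0.01          # localized initial outbreak
    S = S - I                  # each cell starts with total population 1
    R = np.zeros(nx)

    def lap(u):
        # discrete Laplacian with periodic wrap-around via np.roll
        return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx ** 2

    for _ in range(nt):
        dS = d * lap(S) - beta * S * I
        dI = d * lap(I) + beta * S * I - gamma * I
        dR = d * lap(R) + gamma * I
        S = S + dt * dS
        I = I + dt * dI
        R = R + dt * dR
    return S, I, R
```

Because the reaction terms cancel in the sum and periodic diffusion is conservative, total population S + I + R is preserved, which gives a simple sanity check on the integration.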
Pub Date : 2025-02-08DOI: 10.1016/j.bspc.2025.107547
Chang Liang , Lei Yang , Yuming Wang , Jinyang Zhang , Bing Zhang
Background:
Patient-specific lumbar spine models are crucial for enhancing diagnostic accuracy, preoperative planning, intraoperative navigation, and biomechanical analysis of the lumbar spine. However, the methods currently used for creating and analyzing these models primarily involve manual operations, which require significant anatomical expertise and often result in inefficiencies. To overcome these challenges, this study introduces a novel method for automating the creation and analysis of subject-specific lumbar spine models.
Methods:
This study utilizes deep learning algorithms and smoothing algorithms to accurately segment CT images and generate patient-specific three-dimensional (3D) lumbar masks. To ensure accuracy and continuity, vertebral surface models are then constructed and optimized based on these 3D masks, and model accuracy metrics are calculated. An automated modeling program is employed to construct structures such as intervertebral discs (IVD) and generate input files necessary for Finite Element (FE) analysis to simulate biomechanical behavior. The validity of the entire lumbar spine model produced using this method is verified by comparing the model with in vitro experimental data. Finally, the proposed method is applied to a patient-specific model of the degenerated lumbar spine to simulate its biomechanical response and changes.
Results:
In the test set, the neural network achieves an average Dice coefficient (DC) of 97.8%, demonstrating high segmentation accuracy. Moreover, the application of the smoothing algorithm reduces model noise substantially. The smoothed model exhibits an average Hausdorff distance (HD) of 3.53 mm and an average surface distance (ASD) of 0.51 mm, demonstrating high accuracy. The FE analysis results agree closely with in vitro experimental data, and the simulation results of the degenerated lumbar spine model are consistent with trends reported in the existing literature.
{"title":"Development and validation of a human lumbar spine finite element model based on an automated process: Application to disc degeneration","authors":"Chang Liang , Lei Yang , Yuming Wang , Jinyang Zhang , Bing Zhang","doi":"10.1016/j.bspc.2025.107547","DOIUrl":"10.1016/j.bspc.2025.107547","url":null,"abstract":"<div><h3>Background:</h3><div>Patient-specific lumbar spine models are crucial for enhancing diagnostic accuracy, preoperative planning, intraoperative navigation, and biomechanical analysis of the lumbar spine. However, the methods currently used for creating and analyzing these models primarily involve manual operations, which require significant anatomical expertise and often result in inefficiencies. To overcome these challenges, this study introduces a novel method for automating the creation and analysis of subject-specific lumbar spine models.</div></div><div><h3>Methods:</h3><div>This study utilizes deep learning algorithms and smoothing algorithms to accurately segment CT images and generate patient-specific three-dimensional (3D) lumbar masks. To ensure accuracy and continuity, vertebral surface models are then constructed and optimized based on these 3D masks, and model accuracy metrics are calculated. An automated modeling program is employed to construct structures such as intervertebral discs (IVD) and generate input files necessary for Finite Element (FE) analysis to simulate biomechanical behavior. The validity of the entire lumbar spine model produced using this method is verified by comparing the model with in vitro experimental data. Finally, the proposed method is applied to a patient-specific model of the degenerated lumbar spine to simulate its biomechanical response and changes.</div></div><div><h3>Results:</h3><div>In the test set, the neural network achieves an average Dice coefficient (DC) of 97.8%, demonstrating high segmentation accuracy. Moreover, the application of the smoothing algorithm reduces model noise substantially. The smoothed model exhibits an average Hausdorff distance (HD) of 3.53 mm and an average surface distance (ASD) of 0.51 mm, demonstrating high accuracy. The FE analysis results agree closely with in vitro experimental data, and the simulation results of the degenerated lumbar spine model are consistent with trends reported in the existing literature.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"105 ","pages":"Article 107547"},"PeriodicalIF":4.9,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
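The metrics reported in this abstract (Dice coefficient, Hausdorff distance, average surface distance) can be computed for binary masks and surface point sets as in the minimal NumPy sketch below. The paper's mesh-based implementations may differ; the brute-force pairwise distances here are only practical for small point sets.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (n, 3) point sets:
    the largest nearest-neighbor distance in either direction."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

def average_surface_distance(pts_a, pts_b):
    """Mean nearest-neighbor distance, averaged over both directions."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return float((d.min(axis=1).mean() + d.min(axis=0).mean()) / 2.0)
```

Intuitively, Dice measures volumetric overlap, HD captures the worst-case surface deviation, and ASD the typical one, which is why a smoothed model can show a large DC alongside a millimeter-scale HD.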