Pub Date: 2025-01-01, Epub Date: 2024-09-19, DOI: 10.1007/s11517-024-03196-8
Farshid Hamtaei Pour Shirazi, Hossein Parsaei, Alireza Ashraf
Interpreting intramuscular electromyography (iEMG) signals for diagnosing and quantifying the severity of lumbosacral radiculopathy is challenging because signal evaluation is subjective. To address this limitation, a clinical decision support system (CDSS) was developed for diagnosing lumbosacral radiculopathy and quantifying its severity from iEMG signals. The CDSS uses the EMG interference pattern (QEMG IP) method to extract features directly from the iEMG signal and to provide a quantitative expression of injury severity for each muscle as well as overall radiculopathy severity. From 126 time- and frequency-domain features, a set of five features (crest factor, mean absolute value, peak frequency, zero crossing count, and intensity) was selected. These features were derived from the raw iEMG signals, empirical mode decomposition, and the discrete wavelet transform, and a wrapper method was used to determine the most significant features. The CDSS was trained and tested on a dataset of 75 patients, achieving an accuracy of 93.3%, a sensitivity of 93.3%, and a specificity of 96.6%. The system shows promise in assisting physicians in diagnosing lumbosacral radiculopathy with high accuracy and consistency using iEMG data. The CDSS's objective and standardized diagnostic process, along with its potential to reduce the time and effort physicians spend interpreting EMG signals, makes it a potentially valuable tool for the diagnosis and management of lumbosacral radiculopathy. Future work should focus on validating the system's performance in diverse clinical settings and patient populations.
{"title":"A clinical decision support system for diagnosis and severity quantification of lumbosacral radiculopathy using intramuscular electromyography signals.","authors":"Farshid Hamtaei Pour Shirazi, Hossein Parsaei, Alireza Ashraf","doi":"10.1007/s11517-024-03196-8","DOIUrl":"10.1007/s11517-024-03196-8","url":null,"abstract":"<p><p>Interpreting intramuscular electromyography (iEMG) signals for diagnosing and quantifying the severity of lumbosacral radiculopathy is challenging due to the subjective evaluation of signals. To address this limitation, a clinical decision support system (CDSS) was developed for the diagnosis and quantification of the severity of lumbosacral radiculopathy based on intramuscular electromyography (iEMG) signals. The CDSS uses the EMG interference pattern method (QEMG IP) to directly extract features from the iEMG signal and provide a quantitative expression of injury severity for each muscle and overall radiculopathy severity. From 126 time and frequency domain features, a set of five features, including the crest factor, mean absolute value, peak frequency, zero crossing count, and intensity, were selected. These features were derived from raw iEMG signals, empirical mode decomposition, and discrete wavelet transform, and the wrapper method was utilized to determine the most significant features. The CDSS was trained and tested on a dataset of 75 patients, achieving an accuracy of 93.3%, sensitivity of 93.3%, and specificity of 96.6%. The system shows promise in assisting physicians in diagnosing lumbosacral radiculopathy with high accuracy and consistency using iEMG data. The CDSS's objective and standardized diagnostic process, along with its potential to reduce the time and effort required by physicians to interpret EMG signals, makes it a potentially valuable tool for clinicians in the diagnosis and management of lumbosacral radiculopathy. Future work should focus on validating the system's performance in diverse clinical settings and patient populations.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"239-249"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142299625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01, Epub Date: 2024-08-06, DOI: 10.1007/s11517-024-03175-z
Francesca Righetti, Giulia Rubiu, Marco Penso, Sara Moccia, Maria L Carerj, Mauro Pepi, Gianluca Pontone, Enrico G Caiani
This work proposes a convolutional neural network (CNN) that uses different combinations of parametric images computed from cine cardiac magnetic resonance (CMR) images to classify each slice for possible myocardial scar tissue. CNN performance was compared with expert interpretation of CMR with late gadolinium enhancement (LGE) images, used as ground truth (GT), on 206 patients (158 scar, 48 control) from Centro Cardiologico Monzino (Milan, Italy) at both the slice and patient levels. Left ventricular dynamic features were extracted from non-enhanced cine images using parametric images based on both Fourier and monogenic signal analyses. For individual slice classification, the CNN fed with cine images and Fourier-based parametric images achieved an area under the ROC curve of 0.86 (accuracy 0.79, F1 0.81, sensitivity 0.9, specificity 0.65, and negative (NPV) and positive (PPV) predictive values of 0.83 and 0.77, respectively). Remarkably, it exhibited 1.0 prediction accuracy (F1 0.98, sensitivity 1.0, specificity 0.9, NPV 1.0, and PPV 0.97) in classifying patients as control or pathological. The proposed approach represents a first step towards scar detection in contrast-free CMR images. Patient-level results suggest its preliminary potential as a screening tool to guide decisions regarding LGE-CMR prescription, particularly in cases where the indication is uncertain.
{"title":"Deep learning approaches for the detection of scar presence from cine cardiac magnetic resonance adding derived parametric images.","authors":"Francesca Righetti, Giulia Rubiu, Marco Penso, Sara Moccia, Maria L Carerj, Mauro Pepi, Gianluca Pontone, Enrico G Caiani","doi":"10.1007/s11517-024-03175-z","DOIUrl":"10.1007/s11517-024-03175-z","url":null,"abstract":"<p><p>This work proposes a convolutional neural network (CNN) that utilizes different combinations of parametric images computed from cine cardiac magnetic resonance (CMR) images, to classify each slice for possible myocardial scar tissue presence. The CNN performance comparison in respect to expert interpretation of CMR with late gadolinium enhancement (LGE) images, used as ground truth (GT), was conducted on 206 patients (158 scar, 48 control) from Centro Cardiologico Monzino (Milan, Italy) at both slice- and patient-levels. Left ventricle dynamic features were extracted in non-enhanced cine images using parametric images based on both Fourier and monogenic signal analyses. The CNN, fed with cine images and Fourier-based parametric images, achieved an area under the ROC curve of 0.86 (accuracy 0.79, F1 0.81, sensitivity 0.9, specificity 0.65, and negative (NPV) and positive (PPV) predictive values 0.83 and 0.77, respectively), for individual slice classification. Remarkably, it exhibited 1.0 prediction accuracy (F1 0.98, sensitivity 1.0, specificity 0.9, NPV 1.0, and PPV 0.97) in patient classification as a control or pathologic. The proposed approach represents a first step towards scar detection in contrast-free CMR images. Patient-level results suggest its preliminary potential as a screening tool to guide decisions regarding LGE-CMR prescription, particularly in cases where indication is uncertain.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"59-73"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11695392/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141894736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01, Epub Date: 2024-08-16, DOI: 10.1007/s11517-024-03181-1
Mohammad Humayun Kabir, Marek Reformat, Sarah Southon Hryniuk, Kyle Stampe, Edmond Lou
The magnetically controlled growing rod technique is an effective surgical treatment for children who have early-onset scoliosis. The length of the instrumented growing rods is adjusted regularly to compensate for the normal growth of these patients. Manual measurement of rod length on posteroanterior spine radiographs is subjective and time-consuming. A machine learning (ML) system using a deep learning approach was developed to automatically measure the adjusted rod length. Three ML models (a rod model, a 58 mm model, and a head-piece model) were developed to extract the rod length from radiographs. Three hundred and eighty-seven radiographs were used for model development, and 60 radiographs with 118 rods were set aside for final testing. The average precision (AP), the mean absolute difference (MAD) ± standard deviation (SD), and the inter-method correlation coefficient (ICC[2,1]) between the manual and artificial intelligence (AI) adjustment measurements were used to evaluate the developed method. The AP values of the three models were 67.6%, 94.8%, and 86.3%, respectively. The MAD ± SD of the rod length change was 0.98 ± 0.88 mm, and the ICC[2,1] was 0.90. The average time to output a single rod measurement was 6.1 s. The developed AI provides an accurate and reliable method for detecting rod length automatically.
{"title":"Validity of machine learning algorithms for automatically extract growing rod length on radiographs in children with early-onset scoliosis.","authors":"Mohammad Humayun Kabir, Marek Reformat, Sarah Southon Hryniuk, Kyle Stampe, Edmond Lou","doi":"10.1007/s11517-024-03181-1","DOIUrl":"10.1007/s11517-024-03181-1","url":null,"abstract":"<p><p>The magnetically controlled growing rod technique is an effective surgical treatment for children who have early-onset scoliosis. The length of the instrumented growing rods is adjusted regularly to compensate for the normal growth of these patients. Manual measurement of rod length on posteroanterior spine radiographs is subjective and time-consuming. A machine learning (ML) system using a deep learning approach was developed to automatically measure the adjusted rod length. Three ML models-rod model, 58 mm model, and head-piece model-were developed to extract the rod length from radiographs. Three-hundred and eighty-seven radiographs were used for model development, and 60 radiographs with 118 rods were separated for final testing. The average precision (AP), the mean absolute difference (MAD) ± standard deviation (SD), and the inter-method correlation coefficient (ICC<sub>[2,1]</sub>) between the manual and artificial intelligence (AI) adjustment measurements were used to evaluate the developed method. The AP of the 3 models were 67.6%, 94.8%, and 86.3%, respectively. The MAD ± SD of the rod length change was 0.98 ± 0.88 mm, and the ICC<sub>[2,1]</sub> was 0.90. The average time to output a single rod measurement was 6.1 s. The developed AI provided an accurate and reliable method to detect the rod length automatically.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"101-110"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141996790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01, Epub Date: 2024-08-05, DOI: 10.1007/s11517-024-03173-1
Xuankai Yang, Jing Sun, Hongbo Yang, Tao Guo, Jiahua Pan, Weilian Wang
Heart sound signals are vital for machine-assisted detection of congenital heart disease. However, diagnostic performance is limited by noise introduced during heart sound acquisition. A limitation of existing noise reduction schemes is that the pathological components of the signal are weak and can be filtered out along with the noise. In this research, a novel approach for classifying heart sounds based on median ensemble empirical mode decomposition (MEEMD), Hurst analysis, improved threshold denoising, and neural networks is presented. When decomposing the heart sound signal into several intrinsic mode functions (IMFs), mode mixing and mode splitting can be effectively suppressed by MEEMD. Hurst analysis is adopted to identify the noisy content of the IMFs. Then, the noise-dominated IMFs are denoised by an improved threshold function. Finally, the noise-reduced signal is generated by reconstructing the processed components together with the remaining components. A database of 5000 heart sounds from patients with congenital heart disease and normal volunteers was constructed. The Mel spectral coefficients of the denoised signals were used as input vectors to a convolutional neural network for classification to verify the effectiveness of the preprocessing algorithm. An accuracy of 93.8%, a specificity of 93.1%, and a sensitivity of 94.6% were achieved for distinguishing normal cases from abnormal ones.
{"title":"The heart sound classification of congenital heart disease by using median EEMD-Hurst and threshold denoising method.","authors":"Xuankai Yang, Jing Sun, Hongbo Yang, Tao Guo, Jiahua Pan, Weilian Wang","doi":"10.1007/s11517-024-03173-1","DOIUrl":"10.1007/s11517-024-03173-1","url":null,"abstract":"<p><p>Heart sound signals are vital for the machine-assisted detection of congenital heart disease. However, the performance of diagnostic results is limited by noise during heart sound acquisition. A limitation of existing noise reduction schemes is that the pathological components of the signal are weak, which have the potential to be filtered out with the noise. In this research, a novel approach for classifying heart sounds based on median ensemble empirical mode decomposition (MEEMD), Hurst analysis, improved threshold denoising, and neural networks are presented. In decomposing the heart sound signal into several intrinsic mode functions (IMFs), mode mixing and mode splitting can be effectively suppressed by MEEMD. Hurst analysis is adopted for identifying the noisy content of IMFs. Then, the noise-dominated IMFs are denoised by an improved threshold function. Finally, the noise reduction signal is generated by reconstructing the processed components and the other components. A database of 5000 heart sounds from congenital heart disease and normal volunteers was constructed. The Mel spectral coefficients of the denoised signals were used as input vectors to the convolutional neural network for classification to verify the effectiveness of the preprocessing algorithm. An accuracy of 93.8%, a specificity of 93.1%, and a sensitivity of 94.6% were achieved for classifying the normal cases from abnormal one.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"29-44"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141890713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01, Epub Date: 2024-08-06, DOI: 10.1007/s11517-024-03178-w
Kaiwei Yu, Jiafa Chen, Xian Ding, Dawei Zhang
Cognition is crucial to brain function, and accurately classifying cognitive load is essential for understanding the psychological processes across tasks. This paper innovatively combines functional near-infrared spectroscopy (fNIRS) with eye tracking technology to delve into the classification of cognitive load at the neurocognitive level. This integration overcomes the limitations of a single modality, addressing challenges such as feature selection, high dimensionality, and insufficient sample size. We employ fNIRS-eye tracking to collect neural activity and eye tracking data during various cognitive tasks, followed by preprocessing. Using the maximum relevance minimum redundancy algorithm, we extract the most relevant features and evaluate their impact on the classification task. We evaluate classification performance by building models (naive Bayes, support vector machine, K-nearest neighbors, and random forest) and employing cross-validation. The results demonstrate the effectiveness of fNIRS-eye tracking, the maximum relevance minimum redundancy algorithm, and machine learning techniques in discriminating cognitive load levels. This study emphasizes the impact of the number of features on performance, highlighting the need for an optimal feature set to improve accuracy. These findings advance our understanding of neuroscientific features related to cognitive load, pushing neuropsychological research to deeper levels and holding significant implications for future cognitive science.
{"title":"Exploring cognitive load through neuropsychological features: an analysis using fNIRS-eye tracking.","authors":"Kaiwei Yu, Jiafa Chen, Xian Ding, Dawei Zhang","doi":"10.1007/s11517-024-03178-w","DOIUrl":"10.1007/s11517-024-03178-w","url":null,"abstract":"<p><p>Cognition is crucial to brain function, and accurately classifying cognitive load is essential for understanding the psychological processes across tasks. This paper innovatively combines functional near-infrared spectroscopy (fNIRS) with eye tracking technology to delve into the classification of cognitive load at the neurocognitive level. This integration overcomes the limitations of a single modality, addressing challenges such as feature selection, high dimensionality, and insufficient sample capacity. We employ fNIRS-eye tracking technology to collect neural activity and eye tracking data during various cognitive tasks, followed by preprocessing. Using the maximum relevance minimum redundancy algorithm, we extract the most relevant features and evaluate their impact on the classification task. We evaluate the classification performance by building models (naive Bayes, support vector machine, K-nearest neighbors, and random forest) and employing cross-validation. The results demonstrate the effectiveness of fNIRS-eye tracking, the maximum relevance minimum redundancy algorithm, and machine learning techniques in discriminating cognitive load levels. This study emphasizes the impact of the number of features on performance, highlighting the need for an optimal feature set to improve accuracy. These findings advance our understanding of neuroscientific features related to cognitive load, propelling neural psychology research to deeper levels and holding significant implications for future cognitive science.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"45-57"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141898772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-01, Epub Date: 2024-08-14, DOI: 10.1007/s11517-024-03179-9
Mani Farajzadeh Zanjani, Majid Ghoshuni
Working memory plays an important role in cognitive science and is a basic process for learning. Because working memory is limited in capacity and duration, different cognitive tasks are designed to overcome these limitations. This study investigated information flow during a novel visual working memory task in which participants respond to exaggerated and normal pictures. Ten healthy men (mean age 28.5 ± 4.57 years) participated in two stages of encoding and retrieval tasks, during which electroencephalogram (EEG) signals were recorded. The adaptive directed transfer function (ADTF) method is used as a computational tool to investigate the dynamic process of visual working memory retrieval on the event-related potentials (ERPs) extracted from the EEG signal. Network connectivity and P300 sub-components (P3a, P3b, and LPC) are also extracted during visual working memory retrieval. Then, the nonparametric Wilcoxon test and five classifiers are applied to the network properties for feature selection and classification between exaggerated-old and normal-old pictures. The Z-values of Ge are more discriminative than those of the other network properties. In terms of the machine learning approach, the accuracy, F1-score, and specificity of the k-nearest neighbors (KNN) classifier are 81%, 77%, and 81%, respectively; the KNN classifier ranked first among the classifiers. Furthermore, the results of the in-degree/out-degree matrices show that information flows continuously in the right hemisphere during the retrieval of exaggerated pictures, from P3a to P3b. During visual working memory retrieval, the networks associated with attentional processes show greater activation for exaggerated pictures than for normal pictures. This suggests that the exaggerated pictures may have captured more attention and thus required greater cognitive resources for retrieval.
{"title":"Directional information flow analysis in memory retrieval: a comparison between exaggerated and normal pictures.","authors":"Mani Farajzadeh Zanjani, Majid Ghoshuni","doi":"10.1007/s11517-024-03179-9","DOIUrl":"10.1007/s11517-024-03179-9","url":null,"abstract":"<p><p>Working memory plays an important role in cognitive science and is a basic process for learning. While working memory is limited in regard to capacity and duration, different cognitive tasks are designed to overcome these difficulties. This study investigated information flow during a novel visual working memory task in which participants respond to exaggerated and normal pictures. Ten healthy men (mean age 28.5 ± 4.57 years) participated in two stages of the encoding and retrieval tasks. The electroencephalogram (EEG) signals are recorded. Moreover, the adaptive directed transfer function (ADTF) method is used as a computational tool to investigate the dynamic process of visual working memory retrieval on the extracted event-related potentials (ERPs) from the EEG signal. Network connectivity and P300 sub-components (P3a, P3b, and LPC) are also extracted during visual working memory retrieval. Then, the nonparametric Wilcoxon test and five classifiers are applied to network properties for features selection and classification between exaggerated-old and normal-old pictures. The Z-values of Ge is more distinctive rather than other network properties. In terms of the machine learning approach, the accuracy, F1-score, and specificity of the k-nearest neighbors (KNN), classifiers are 81%, 77%, and 81%, respectively. KNN classifier ranked first compared with other classifiers. Furthermore, the results of in-degree/out-degree matrices show that the information flows continuously in the right hemisphere during the retrieval of exaggerated pictures, from P3a to P3b. During the retrieval of visual working memory, the networks associated with attentional processes show greater activation for exaggerated pictures compared to normal pictures. This suggests that the exaggerated pictures may have captured more attention and thus required greater cognitive resources for retrieval.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"89-100"},"PeriodicalIF":2.6,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141977082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-30, DOI: 10.1007/s11517-024-03267-w
Qiuqi Yuan, Zhi Xiao, Xiaoming Zhu, Bin Li, Jingzhou Hu, Yunfei Niu, Shiwei Xu
Finite element human body models (HBMs) are the primary method for predicting human biological responses in vehicle collisions, especially personalized HBMs that account for diverse populations. Yet, creating personalized HBMs from a single image is a challenging task. This study addresses that challenge by providing a framework for HBM personalization that starts from a single image, which is used to estimate the subject's skin point cloud, the skeletal point cloud, and the relative positions of the skeletons. Personalized HBMs were created by morphing the baseline HBM to the estimated skin and skeleton point clouds using a point cloud registration-based mesh morphing method. Using this framework, eight personalized HBMs with various biological characteristics (e.g., sex, height, and weight) were created, with element quality comparable to the baseline HBM. The mean geometric errors of the personalized FEMs generated by the framework are less than 7 mm, which was found to be acceptable based on the biomechanical response evaluations conducted in this study.
"A fast-modeling framework for personalized human body models based on a single image." Medical & Biological Engineering & Computing.
Pub Date: 2024-12-27, DOI: 10.1007/s11517-024-03236-3
Rongrong Fu, Shaoxiong Niu, Xiaolei Feng, Ye Shi, Chengcheng Jia, Jing Zhao, Guilin Wen
This study focuses on improving the performance of steady-state visual evoked potential (SSVEP) decoding in brain-computer interfaces (BCIs) for robotic control systems. The challenge lies in effectively reducing the impact of artifacts on raw data to enhance performance in both quality and reliability. The proposed MVMD-MSI algorithm combines the advantages of multivariate variational mode decomposition (MVMD) and the multivariate synchronization index (MSI). Compared with widely used algorithms, the novelty of this method is its ability to decompose nonlinear and non-stationary EEG signals into intrinsic mode functions (IMFs) across different frequency bands with the best center frequency and bandwidth. SSVEP decoding performance can therefore be improved by this method, and the effectiveness of MVMD-MSI was evaluated on a robotic arm with 6 degrees of freedom. Offline experiments were conducted to optimize the algorithm's parameters, resulting in significant improvements. Additionally, the algorithm performed well even with fewer channels and shorter data lengths. In online experiments, the algorithm achieved an average accuracy of 98.31% at 1.8 s, confirming its feasibility and effectiveness for real-time SSVEP BCI-based robotic arm applications. The proposed MVMD-MSI algorithm represents a significant advancement in SSVEP analysis for robotic control systems: it enhances decoding performance and shows promise for practical application in this field.
"Performance investigation of MVMD-MSI algorithm in frequency recognition for SSVEP-based brain-computer interface and its application in robotic arm control." Medical & Biological Engineering & Computing.
Pub Date: 2024-12-26, DOI: 10.1007/s11517-024-03270-1
Rashmita Chatterjee, Zahra Moussavi
Spatial impairment characterizes Alzheimer's disease (AD) from its earliest stages. We present the design and preliminary evaluation of "Barn Ruins," a serious virtual reality (VR) wayfinding game for early-stage AD. Barn Ruins is tailored to the cognitive abilities of this population, featuring simple controls and an error-based scoring system. Ten younger adults, ten cognitively healthy older adults, and ten age-matched individuals with AD participated in this study. Before gameplay, they underwent cognitive assessments using the Montreal Cognitive Assessment (MoCA) and the Montgomery-Åsberg Depression Rating Scale (MADRS). The game involves navigating a virtual environment to find a target room, with increasing levels of difficulty. This study aimed to confirm the cognitive sensitivity of the Barn Ruins spatial learning score by examining its relationship with MoCA scores. MoCA scores and spatial learning scores had a correlation coefficient of 0.755 (p < 0.001). Logistic regression further revealed that higher spatial learning scores significantly predicted lower odds of cognitive impairment (OR = 0.495, 95% CI [0.274, 0.746], p < 0.005). These initial results suggest that the game is effective in differentiating performance among participant groups. This research demonstrates the potential of the Barn Ruins game as an innovative tool for assessing spatial navigation in AD and highlights areas for future validation and investigation as a training tool.
{"title":"Evaluation of a cognition-sensitive spatial virtual reality game for Alzheimer's disease.","authors":"Rashmita Chatterjee, Zahra Moussavi","doi":"10.1007/s11517-024-03270-1","DOIUrl":"https://doi.org/10.1007/s11517-024-03270-1","url":null,"abstract":"<p><p>Spatial impairment characterizes Alzheimer's disease (AD) from its earliest stages. We present the design and preliminary evaluation of \"Barn Ruins,\" a serious virtual reality (VR) wayfinding game for early-stage AD. Barn Ruins is tailored to the cognitive abilities of this population, featuring simple controls and error-based scoring system. Ten younger adults, ten cognitively healthy older adults, and ten age-matched individuals with AD participated in this study. They underwent cognitive assessments using the Montreal Cognitive Assessment (MoCA) and the Montgomery-Åsberg Depression Rating Scale (MADRS) before gameplay. The game involves navigating a virtual environment to find a target room, with increasing levels of difficulty. This study aimed to confirm the cognitive sensitivity of the Barn Ruins' spatial learning score by studying its relationship with Montreal Cognitive Assessment (MoCA) scores. MoCA scores and spatial learning scores had a correlation coefficient of 0.755 (p < 0.001). Logistic regression further revealed that higher spatial learning scores significantly predicted lower odds of cognitive impairment (OR = 0.495, 95% CI [0.274, 0.746], p < 0.005). The initial results suggest that the game is effective in differentiating performance among participant groups. This research demonstrates the potential of the Barn Ruins game as an innovative tool for assessing spatial navigation in AD, highlighting areas for future validation and investigation as a training tool.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142899971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-21, DOI: 10.1007/s11517-024-03263-0
Jiajun Feng, Yuqian Huang, Zhenbin Hu, Junjie Guo
The objective of this study is to investigate the efficacy of a semantic segmentation model in predicting the cardiothoracic ratio (CTR) and heart enlargement and to compare its consistency with the reference standard. A total of 650 consecutive chest radiographs from our center and 756 from public datasets were retrospectively included to develop a segmentation model. Three semantic segmentation models were used to segment the heart and lungs. A soft voting integration method was used to improve segmentation accuracy and measure the CTR automatically. Bland-Altman and Pearson's correlation analyses were used to assess the consistency and correlation between the automated CTR measurements and reference standards. Automated CTR measurements were compared with the reference standard using the Wilcoxon signed-rank test. The diagnostic efficacy of the model for heart enlargement was evaluated using the AUC. The soft voting integration model was strongly correlated (r = 0.98, P < 0.001) and consistent (average standard deviation of 0.0048 cm/s) with the reference standard. There was no statistically significant difference between the automated CTR measurement and the reference standard in healthy subjects or in patients with pneumothorax, pleural effusion, or lung mass (P > 0.05). On the external test data, the accuracy, sensitivity, specificity, and AUC in determining heart enlargement were 96.0%, 79.5%, 99.1%, and 0.988, respectively. The deep learning method measured each chest radiograph faster than the average manual measurement by the radiologist (about 2 s vs 25.75 ± 4.35 s, P < 0.001). This study provides a semantic segmentation integration model for chest radiographs that measures the CTR and determines heart enlargement effectively, quickly, and accurately, even when chest structure is altered by different chest diseases. The development of the automated segmentation integration model can help improve the consistency of CTR measurement, reduce the workload of radiologists, and improve their efficiency.
{"title":"Automated measurement of cardiothoracic ratio based on semantic segmentation integration model using deep learning.","authors":"Jiajun Feng, Yuqian Huang, Zhenbin Hu, Junjie Guo","doi":"10.1007/s11517-024-03263-0","DOIUrl":"https://doi.org/10.1007/s11517-024-03263-0","url":null,"abstract":"<p><p>The objective of this study is to investigate the efficacy of the semantic segmentation model in predicting cardiothoracic ratio (CTR) and heart enlargement and compare its consistency with the reference standard. A total of 650 consecutive chest radiographs from our center and 756 public datasets were retrospectively included to develop a segmentation model. Three semantic segmentation models were used to segment the heart and lungs. A soft voting integration method was used to improve the segmentation accuracy and measure CTR automatically. Bland-Altman and Pearson's correlation analyses were used to compare the consistency and correlation between CTR automated measurements and reference standards. CTR automated measurements were compared with reference standard using the Wilcoxon signed-rank test. The diagnostic efficacy of the model for heart enlargement was evaluated using the AUC. The soft voting integration model was strongly correlated (r = 0.98, P < 0.001) and consistent (average standard deviation of 0.0048 cm/s) with the reference standard. No statistical difference between CTR automated measurement and reference standard in healthy subjects, pneumothorax, pleural effusion, and lung mass patients (P > 0.05). In the external test data, the accuracy, sensitivity, specificity, and AUC in determining heart enlargement were 96.0%, 79.5%, 99.1%, and 0.988, respectively. The deep learning method was calculated faster per chest radiograph than the average time manually calculated by the radiologist (about 2 s vs 25.75 ± 4.35 s, respectively, P < 0.001). This study provides a semantic segmentation integration model of chest radiographs to measure CTR and determine heart enlargement with chest structure changes due to different chest diseases effectively, faster, and accurately. The development of the automated segmentation integration model is helpful in improving the consistency of CTR measurement, reducing the workload of radiologists, and improving their work efficiency.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142872615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}