Pub Date: 2024-09-25. DOI: 10.1016/j.cmpb.2024.108438
Raffaele Giancotti, Pietro Bosoni, Patrizia Vizza, Giuseppe Tradigo, Agostino Gnasso, Pietro Hiram Guzzi, Riccardo Bellazzi, Concetta Irace, Pierangelo Veltri
<div><h3>Background:</h3><div>Type 1 Diabetes Mellitus (T1DM) is a chronic metabolic disease affecting millions of people worldwide. T1DM requires patients to continuously monitor their blood glucose levels. Because of pancreatic dysfunction, patients correct glucose values with injections of synthetic insulin. Continuous Glucose Monitoring (CGM) is a system that includes an algorithm to measure (and in some cases to predict) glucose levels at a frequent sampling rate. This enables the implementation of advanced devices, including automated insulin delivery pumps. Nevertheless, CGM still presents some limitations, including (i) the delay (time lag) in detecting changes in glucose levels compared to traditional blood glucose measurement, and (ii) the lack of a sufficient lead time to accurately predict glucose values.</div></div><div><h3>Methods:</h3><div>We propose a framework based on a Gated Recurrent Unit (GRU) model to forecast both short- and long-term glucose values using heart rate (HR) and interstitial glucose (IG) values. The framework acquires HR and IG data and predicts glucose values with higher precision than state-of-the-art models. For training and testing, we used the OhioT1DM Dataset, which includes physiological data such as HR and IG values collected over an 8-week observation period. Additionally, we validated our framework on two other glucose datasets to ensure its generalizability across different HR and IG sampling frequencies. The proposed framework can be used to optimize the CGM system by incorporating patient HR measurements, thereby improving the prediction of short- and long-term glucose levels and reducing risks associated with conditions such as hypoglycemia.</div></div><div><h3>Results:</h3><div>Experimental tests were conducted using HR and IG data from the OhioT1DM Dataset, as well as from two additional T1DM patient datasets. 
We analyzed 6 patients from the OhioT1DM Dataset and validated the algorithm on 23 patients from two different university hospitals (6 from the University of Catanzaro medical hospital and 17 from a validated study at IRCCS San Matteo Hospital in Pavia), for a total of 29 patients. Our framework improves the forecasting of IG values in terms of RMSE and MAE for different choices of prediction horizon (PH). For PHs of 5, 10, 20, 30, and 60 min, we reach RMSEs of 5.0, 9.38, 15.27, 20.48, and 34.16, respectively. The framework is freely available as open source, with an example dataset, on a GitHub repository (see <span><span>https://github.com/rafgia/attention_to_glycemia</span></span>).</div></div><div><h3>Conclusion:</h3><div>Our framework offers a promising solution for improving glucose level prediction and management in T1DM patients. By leveraging a GRU model and incorporating HR and IG values, we achieve more precise glucose level forecasting than state-of-the-art models.</div></div>
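The GRU at the heart of such a framework can be illustrated with a small self-contained sketch. The cell below runs a generic GRU recurrence over a window of paired (HR, IG) samples and then computes the RMSE/MAE metrics that the paper reports per prediction horizon; all sizes, weights, and data are synthetic placeholders, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One GRU step: update gate z, reset gate r, candidate state h~.
def gru_step(x, h, W, U, b):
    z = sigmoid(x @ W["z"] + h @ U["z"] + b["z"])   # update gate
    r = sigmoid(x @ W["r"] + h @ U["r"] + b["r"])   # reset gate
    h_tilde = np.tanh(x @ W["h"] + (r * h) @ U["h"] + b["h"])
    return (1 - z) * h + z * h_tilde

hidden = 16
W = {k: rng.normal(0, 0.1, (2, hidden)) for k in "zrh"}
U = {k: rng.normal(0, 0.1, (hidden, hidden)) for k in "zrh"}
b = {k: np.zeros(hidden) for k in "zrh"}
w_out = rng.normal(0, 0.1, hidden)

# A 12-sample window of paired (HR, IG) readings (synthetic numbers).
window = rng.normal(0, 1, (12, 2))
h = np.zeros(hidden)
for x_t in window:
    h = gru_step(x_t, h, W, U, b)
pred = h @ w_out  # glucose estimate at t + PH (untrained, illustrative)

# RMSE and MAE, the per-horizon error metrics reported in the paper.
preds = np.array([pred, pred + 1.0])
targets = np.zeros(2)
rmse = float(np.sqrt(np.mean((preds - targets) ** 2)))
mae = float(np.mean(np.abs(preds - targets)))
```

A real forecaster would of course learn `W`, `U`, `b`, and `w_out` by backpropagation on patient data; the sketch only shows the recurrence and metrics.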
Forecasting glucose values for patients with type 1 diabetes using heart rate data. Computer methods and programs in biomedicine, 257, Article 108438.
Pub Date: 2024-09-24. DOI: 10.1016/j.cmpb.2024.108441
Mohammad Reza Ghahramani, Omid Bavi
Background and Objective
Brain tumors are among the most common diseases and causes of death in humans. Because brain tumor growth poses irreparable risks to the patient, predicting tumor growth and understanding its effect on brain tissue can increase the effectiveness of treatment strategies.
Methods
This study examines brain tumor growth using mathematical modeling based on the reaction-diffusion equation and a biomechanical model based on continuum mechanics principles. Using intensity thresholding of magnetic resonance images, a heterogeneous, close-to-reality model of the brain environment was built, and the results were validated against experimental data to maximize the accuracy of growth prediction.
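The reaction-diffusion backbone of such models is commonly the Fisher-Kolmogorov equation, du/dt = D d2u/dx2 + rho u(1 - u). A minimal homogeneous 1-D finite-difference sketch is shown below; the D, rho, grid, and seed values are illustrative only, whereas the paper's model uses heterogeneous, MRI-derived properties coupled to mechanics.

```python
import numpy as np

# 1-D Fisher-Kolmogorov sketch of normalized tumor cell density u:
# du/dt = D * d2u/dx2 + rho * u * (1 - u).
D, rho = 0.1, 0.5          # diffusion [mm^2/day], proliferation [1/day]
dx, dt = 0.5, 0.1          # grid spacing [mm], time step [day]
n, steps = 101, 500

u = np.zeros(n)
u[n // 2] = 0.5            # small seed of tumor cells at the center

for _ in range(steps):
    # Central-difference Laplacian; boundaries handled crudely by
    # zeroing the flux term there (no growth leaves the domain edge).
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0
    u = u + dt * (D * lap + rho * u * (1 - u))

total_mass = u.sum() * dx  # tumor burden, which grows over time
```

With these step sizes the explicit scheme is stable (D dt / dx^2 = 0.04), and the density stays within [0, 1] as the logistic term requires.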
Results
The obtained results were compared with conventional models reported in the literature. In addition to incorporating chemotherapy effects into the governing equations, the model performs real-time finite element analysis of the stress tensors in the tissue surrounding tumor cells and accounts for their role in changing the shape and growth of the tumor, which adds to its relevance and accuracy.
Conclusions
Comparison of the obtained results with conventional models shows that the heterogeneous model is more reliable because it assigns appropriate properties to the different regions of the brain. The presented model can contribute to personalized medicine, aid in understanding the dynamics of tumor growth, optimize treatment regimens, and support the development of adaptive therapy strategies.
Heterogeneous biomechanical/mathematical modeling of spatial prediction of glioblastoma progression using magnetic resonance imaging-based finite element method. Computer methods and programs in biomedicine, 257, Article 108441.
Pub Date: 2024-09-24. DOI: 10.1016/j.cmpb.2024.108433
Yun Bing, Tamás I. Józsa, Stephen J. Payne
Background and objective:
Oxygen is carried to the brain by blood flow through generations of vessels across a wide range of length scales. This multi-scale nature of blood flow and oxygen transport poses challenges for investigating the mechanisms underlying both healthy and pathological states through imaging techniques alone. Recently, multi-scale models describing whole brain perfusion and oxygen transport have been developed. Such models rely on effective parameters that represent the microscopic properties. While the parameters of the perfusion models have been characterised, those for oxygen transport are still lacking. In this study, we set out to quantify the parameters associated with oxygen transport and their uncertainties.
Methods:
Effective parameter values of a continuum-based porous multi-scale, multi-compartment oxygen transport model are systematically estimated. In particular, geometric parameters that capture the microvascular topologies are obtained through statistically accurate capillary networks. Maximum consumption rates of oxygen are optimised to uniquely define the oxygen distribution over depth. Simulations are then carried out within a one-dimensional tissue column and a three-dimensional patient-specific brain mesh using the finite element method.
Results:
Effective values of the geometric parameters, vessel volume fraction and surface area to volume ratio, are found to be 1.42% and 627 mm²/mm³, respectively. These values compare well with those acquired from human and monkey vascular samples.
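For intuition on the one-at-a-time analysis, the toy model below solves a 1-D tissue column with constant oxygen consumption (D p'' = M, fixed partial pressure at the vessel side, zero flux at the far end) and perturbs each parameter by 10%. The parameter values and the simplified constant consumption law are assumptions for illustration only; the paper's model uses Michaelis-Menten-type consumption and many more parameters.

```python
import numpy as np

# Steady-state 1-D tissue oxygen column: D * p'' = M, with p(0) = p0
# (vessel-side boundary) and p'(L) = 0 (zero flux at the far end).
# The analytic solution is p(x) = p0 + (M / D) * (x**2 / 2 - L * x).
def tissue_po2(D=1.0, M=0.05, p0=90.0, L=1.0, n=51):
    x = np.linspace(0.0, L, n)
    return p0 + (M / D) * (x**2 / 2 - L * x)

base = tissue_po2()

# One-at-a-time (OAT) sensitivity: perturb each parameter by +10% and
# record the change in mean tissue oxygenation.
params = {"D": 1.0, "M": 0.05, "p0": 90.0}
sensitivity = {}
for name, value in params.items():
    perturbed = tissue_po2(**{**params, name: 1.1 * value})
    sensitivity[name] = float(np.mean(perturbed) - np.mean(base))
```

As expected, increasing consumption M lowers mean oxygenation, while increasing diffusivity D or the boundary pressure p0 raises it.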
Conclusions:
The findings of this study demonstrate the validity of using a porous continuum approach to model organ-scale oxygen transport and draw attention to the significance of anatomy and parameters associated with inter-compartment diffusion.
Parameter quantification for oxygen transport in the human brain. Computer methods and programs in biomedicine, 257, Article 108433.
Pub Date: 2024-09-21. DOI: 10.1016/j.cmpb.2024.108416
Seo-Hee Kim, Sun Young Park, Hyungseok Seo, Jiyoung Woo
Background:
In predicting post-operative outcomes for patients with end-stage renal disease, our study faced challenges related to class imbalance and a high-dimensional feature space. To overcome class imbalance and improve interpretability, we propose a novel feature selection approach based on multi-agent reinforcement learning.
Methods:
We propose a multi-agent feature selection model based on a comprehensive reward function that combines classification model performance, Shapley additive explanations (SHAP) values, and mutual information. The definition of rewards in reinforcement learning is crucial for model convergence and performance. Initially, we set a deterministic reward based on the mutual information between each variable and the target class, selecting variables that are highly dependent on the class and thus accelerating convergence. We then prioritized variables that influence the minority class on a per-sample basis and introduced a dynamic reward distribution strategy using SHAP values to improve interpretability and address the class imbalance problem.
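The deterministic, mutual-information part of such a reward can be sketched as follows. The `reward` weighting and the `f1`/`shap_term` arguments are hypothetical stand-ins for the paper's performance and SHAP terms; only the mutual-information computation is spelled out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete mutual information I(X; Y) in nats, estimated from samples.
# Features highly dependent on the class get larger rewards, which is
# the mechanism used to speed up convergence.
def mutual_information(x, y):
    mi, n = 0.0, len(x)
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

y = rng.integers(0, 2, 1000)                # binary class label
informative = y ^ (rng.random(1000) < 0.1)  # mostly copies the label
noise = rng.integers(0, 2, 1000)            # independent of the label

# Combined reward sketch: MI term plus placeholder performance (f1) and
# per-sample SHAP terms, weighted by w (all weights hypothetical).
def reward(mi_term, f1=0.7, shap_term=0.1, w=(1.0, 1.0, 1.0)):
    return w[0] * mi_term + w[1] * f1 + w[2] * shap_term

r_informative = reward(mutual_information(informative, y))
r_noise = reward(mutual_information(noise, y))
```

With everything else equal, the informative feature earns a visibly larger reward than the noise feature, which is exactly the selection pressure the deterministic term is meant to provide.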
Results:
Integrating electronic medical records, anesthesia records, operating room vital signs, and pre-operative anesthesia evaluations, our approach effectively mitigated class imbalance and demonstrated superior performance in ablation analysis. Our model achieved a 16% increase in the minority-class F1 score and an 8.2% increase in the overall F1 score compared to the baseline model without feature selection.
Conclusion:
This study shows that multi-agent-based feature selection is a promising approach to solving the class imbalance problem.
Feature selection integrating Shapley values and mutual information in reinforcement learning: An application in the prediction of post-operative outcomes in patients with end-stage renal disease. Computer methods and programs in biomedicine, 257, Article 108416.
Pub Date: 2024-09-21. DOI: 10.1016/j.cmpb.2024.108431
Wei Li, Huixia Zhang, Linjie Wang, Pengyun Wang, Kun Yu
Background and Objective:
Spatially resolved gene expression profiles are crucial for understanding tissue structure and function. However, the lack of single-cell resolution in these profiles demands their integration with single-cell RNA sequencing data for accurate dataset deconvolution. We propose STGAT, an innovative deconvolution method that leverages graph attention networks to enhance spatial transcriptomic (ST) data analysis.
Methods:
STGAT generates pseudo-ST data that more comprehensively represent the cell-type composition of real-ST data by using three different sampling probabilities. A comprehensive combined graph is then constructed to capture the complex relationships both across the pseudo- and real-ST data and within each dataset. Moreover, the graph attention network enables STGAT to dynamically assign weights to the connections between spots, significantly improving the accuracy of cell-type composition predictions.
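A single-head graph attention layer of the kind STGAT builds on can be sketched in a few lines. This is a generic GAT-style layer on a toy spot graph, with tanh in place of the usual LeakyReLU scoring for brevity; it is not STGAT itself, and all shapes and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Single-head graph attention: each spot aggregates its neighbours'
# projected features with normalized, learned weights, so connections
# between spots are weighted dynamically rather than uniformly.
def graph_attention(H, A, W, a):
    Z = H @ W                                 # project spot features
    n = A.shape[0]
    e = np.full((n, n), -np.inf)              # -inf masks non-edges
    for i in range(n):
        for j in range(n):
            if A[i, j]:                       # score only real edges
                e[i, j] = np.tanh(a @ np.concatenate([Z[i], Z[j]]))
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)  # softmax per spot
    return alpha @ Z, alpha

n_spots, f_in, f_out = 5, 8, 4
H = rng.normal(size=(n_spots, f_in))          # spot expression features
A = np.ones((n_spots, n_spots), dtype=bool)   # fully connected toy graph
W = rng.normal(size=(f_in, f_out))
a = rng.normal(size=2 * f_out)

Z_new, alpha = graph_attention(H, A, W, a)
```

Each row of `alpha` sums to one, so a spot's new representation is a convex combination of its neighbours' projected features.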
Results:
Extensive comparative experiments on simulated and real-world datasets demonstrate the superior performance of STGAT for cell-type deconvolution. The method outperforms six established methods and is robust across various biological contexts.
Conclusion:
STGAT produces more precise cell-type composition inferences that are more consistent with prior knowledge, suggesting its potential to improve the resolution and accuracy of spatial transcriptomics data analysis.
STGAT: Graph attention networks for deconvolving spatial transcriptomics data. Computer methods and programs in biomedicine, 257, Article 108431.
Pub Date: 2024-09-20. DOI: 10.1016/j.cmpb.2024.108436
Priscila de Oliveira Bressane Lima, Jan van de Kassteele, Maarten Schipper, Naomi Smorenburg, Martijn S. van Rooijen, Janneke Heijne, Rolina D. van Gaalen
Background
During the COVID-19 pandemic, the National Institute for Public Health and the Environment in the Netherlands developed a pipeline of scripts to automate and streamline the production of epidemiological situation reports (epi‑sitrep). The pipeline was developed for the Automation of Data Import, Summarization, and Communication (hereafter called the A-DISC pipeline).
Objective
This paper describes the A-DISC pipeline and provides a customizable scripts template that may be useful for other countries wanting to automate their infectious disease surveillance processes.
Methods
The A-DISC pipeline was developed using the open-source statistical software R. It is organized in four modules: Prepare, Process data, Produce report, and Communicate. The Prepare scripts set up the working environment (e.g., load packages). The (data-specific) Process data scripts import, validate, verify, transform, save, analyze, and summarize data as tables and figures, and store these data summaries. The Produce report scripts gather summaries from multiple data sources and integrate them into an R Markdown document – the epi‑sitrep. The Communicate scripts e-mail the epi‑sitrep to stakeholders.
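The four-module flow can be mirrored in a short sketch. The actual pipeline is written in R, so this Python version, with placeholder function bodies and an invented report date, only illustrates the Prepare, Process data, Produce report, Communicate structure.

```python
# Python sketch of the four-module structure described above (the real
# A-DISC pipeline is R); every function body is a placeholder.
def prepare():
    """Set up the working environment (packages, paths, dates)."""
    return {"report_date": "2023-03-01"}

def process_data(source, env):
    """Import, validate, transform, and summarize one data source."""
    return {"source": source, "summary": f"table for {source}"}

def produce_report(summaries, env):
    """Gather per-source summaries into one report document."""
    lines = [f"Situation report, {env['report_date']}"]
    lines += [s["summary"] for s in summaries]
    return "\n".join(lines)

def communicate(report):
    """Send the report to stakeholders (placeholder: just return it)."""
    return report

env = prepare()
summaries = [process_data(s, env) for s in ("cases", "hospitalizations")]
report = communicate(produce_report(summaries, env))
```

The key design point carried over from A-DISC is that only the per-source `process_data` step is data-specific; the other three modules stay fixed as new sources are added.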
Results
As of March 2023, up to ten data sources were automatically summarized into tables and figures by A-DISC. These data summaries were featured in routine, extensive COVID-19 epi‑sitreps, shared as open data, plotted on RIVM's website, sent to stakeholders, and submitted to the European Centre for Disease Prevention and Control via The European Surveillance System (TESSy) [38].
Discussion
In the face of the unprecedented number of cases reported during the COVID-19 pandemic, the A-DISC pipeline was essential for producing frequent and comprehensive epi‑sitreps. A-DISC's modular and intuitive structure allowed the integration of data sources of varying complexity, encouraged collaboration among people with different R-scripting skills, and improved data lineage. The A-DISC pipeline remains under active development and is currently being used, in modified form, for the automation and professionalization of various other disease surveillance processes at the RIVM, with high acceptance from the participating epidemiologists.
Conclusion
The A-DISC pipeline is an open-source, robust, and customizable tool for automating epi‑sitreps based on multiple data sources.
{"title":"Automating COVID-19 epidemiological situation reports based on multiple data sources, the Netherlands, 2020 to 2023","authors":"Priscila de Oliveira Bressane Lima , Jan van de Kassteele , Maarten Schipper , Naomi Smorenburg , Martijn S․ van Rooijen , Janneke Heijne , Rolina D․ van Gaalen","doi":"10.1016/j.cmpb.2024.108436","DOIUrl":"10.1016/j.cmpb.2024.108436","url":null,"abstract":"<div><h3>Background</h3><div>During the COVID-19 pandemic, the National Institute for Public Health and the Environment in the Netherlands developed a pipeline of scripts to automate and streamline the production of epidemiological situation reports (epi‑sitrep). The pipeline was developed for the Automation of Data Import, Summarization, and Communication (hereafter called the A-DISC pipeline).</div></div><div><h3>Objective</h3><div>This paper describes the A-DISC pipeline and provides a customizable scripts template that may be useful for other countries wanting to automate their infectious disease surveillance processes.</div></div><div><h3>Methods</h3><div>The A-DISC pipeline was developed using the open-source statistical software R. It is organized in four modules: <em>Prepare, Process data, Produce report</em>, and <em>Communicate.</em> The <em>Prepare</em> scripts set the working environment (e.g., load packages). The (data-specific) <em>Process data</em> scripts import, validate, verify, transform, save, analyze, and summarize data as tables and figures and store these data summaries. The <em>Produce report</em> scripts gather summaries from multiple data sources and integrate them into a RMarkdown document – the epi‑sitrep. The <em>Communicate</em> scripts send e-mails to stakeholders with the epi‑sitrep.</div></div><div><h3>Results</h3><div>As of March 2023, up to ten data sources were automatically summarized into tables and figures by A-DISC. 
These data summaries were featured in routine extensive COVID-19 epi‑sitreps, shared as open data, plotted on RIVM's website, sent to stakeholders and submitted to European Centre for Disease Prevention and Control via the European Surveillance System -TESSy [<span><span>38</span></span>].</div></div><div><h3>Discussion</h3><div>In the face of an unprecedented high number of cases being reported during the COVID-19 pandemic, the A-DISC pipeline was essential to produce frequent and comprehensive epi‑sitreps. A-DISC's modular and intuitive structure allowed for the integration of data sources of varying complexities, encouraged collaboration among people with various R-scripting capabilities, and improved data lineage. The A-DISC pipeline remains under active development and is currently being used in modified form for the automatization and professionalization of various other disease surveillance processes at the RIVM, with high acceptance from the participant epidemiologists.</div></div><div><h3>Conclusion</h3><div>The A-DISC pipeline is an open-source, robust, and customizable tool for automating epi‑sitreps based on multiple data sources.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"257 ","pages":"Article 108436"},"PeriodicalIF":4.9,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142343062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
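The four A-DISC modules map naturally onto a simple orchestration pattern. The pipeline itself is written in R; purely as an illustration of the Prepare / Process data / Produce report / Communicate flow, here is a minimal Python sketch in which every function, field, and data-source name is hypothetical:

```python
# Illustrative sketch of a four-module surveillance pipeline (not the actual
# A-DISC code, which is written in R). All names here are invented.

def prepare():
    """Set up the working environment (in R this would load packages, paths)."""
    return {"output_dir": "reports"}

def process_data(source_name, records):
    """Import, validate, and summarize one data source into a summary table."""
    valid = [r for r in records if r.get("cases") is not None]
    return {"source": source_name, "n_records": len(valid),
            "total_cases": sum(r["cases"] for r in valid)}

def produce_report(summaries):
    """Gather per-source summaries and integrate them into one report document."""
    lines = ["# Epidemiological situation report"]
    for s in summaries:
        lines.append(f"- {s['source']}: {s['total_cases']} cases "
                     f"({s['n_records']} records)")
    return "\n".join(lines)

def communicate(report, recipients):
    """Send the report to stakeholders (stubbed: returns the outgoing message)."""
    return {"to": recipients, "body": report}

env = prepare()
summaries = [
    process_data("hospital_admissions", [{"cases": 3}, {"cases": None}, {"cases": 5}]),
    process_data("test_positivity", [{"cases": 12}]),
]
report = produce_report(summaries)
message = communicate(report, ["stakeholder@example.org"])
```

The point of the structure, as in A-DISC, is that each data source only needs its own `process_data`-style script; report production and communication stay generic.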
Pub Date : 2024-09-20DOI: 10.1016/j.cmpb.2024.108427
Ahmet Sen , Elnaz Ghajar-Rahimi , Miquel Aguirre , Laurent Navarro , Craig J. Goergen , Stephane Avril
Background and Objective:
Computational models of hemodynamics can contribute to optimizing surgical plans and improve our understanding of cardiovascular diseases. Recently, machine learning methods have become essential to reduce the computational cost of these models. In this study, we propose a method that integrates 1-D blood flow equations with Physics-Informed Graph Neural Networks (PIGNNs) to estimate the propagation of blood flow velocity and lumen area pulse waves along arteries.
Methods:
Our methodology involves the creation of a graph based on arterial topology, where each 1-D line represents edges and nodes in the blood flow analysis. The innovation lies in decoding the mathematical data connecting the nodes, where each node has velocity and lumen area pulse waveform outputs. The training protocol for PIGNNs involves measurement data, specifically velocity waves measured from inlet and outlet vessels and diastolic lumen area measurements from each vessel. To optimize the learning process, our approach incorporates fundamental physical principles directly into the loss function. This comprehensive training strategy not only harnesses the power of machine learning but also ensures that PIGNNs respect fundamental laws governing fluid dynamics.
Results:
The accuracy was validated in silico with different arterial networks, where PIGNNs achieved a coefficient of determination (R²) consistently above 0.99, comparable to numerical methods like the discontinuous Galerkin scheme. Moreover, with in vivo data, the prediction reached R² values greater than 0.80, demonstrating the method’s effectiveness in predicting flow and lumen dynamics using minimal data.
Conclusions:
This study showcased the ability to calculate lumen area and blood flow rate in blood vessels within a given topology by seamlessly integrating 1-D blood flow with PIGNNs, using only blood flow velocity measurements. Moreover, this study is the first to compare the PIGNNs method with other classic Physics-Informed Neural Network (PINNs) approaches for blood flow simulation. Our findings highlight the potential to use this cost-effective and proficient tool to estimate real-time arterial pulse waves.
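The Methods above describe embedding fundamental physical principles directly into the loss function. As a rough illustration of that idea (not the authors' PIGNN implementation; the finite-difference scheme, grid sizes, and the weighting `lam` are assumptions), a residual of the 1-D mass-conservation equation, dA/dt + d(A·U)/dx = 0, can be combined with a data-misfit term:

```python
import numpy as np

# Hedged sketch: a physics term entering the loss alongside a data term,
# using the 1-D continuity equation dA/dt + d(A*U)/dx = 0 as the example.

def physics_residual(A, U, dx, dt):
    """Residual of dA/dt + d(A*U)/dx on interior grid points.
    A, U are arrays of shape [n_time, n_space]."""
    dA_dt = (A[1:, 1:-1] - A[:-1, 1:-1]) / dt          # forward difference in time
    flux = A * U
    dF_dx = (flux[:-1, 2:] - flux[:-1, :-2]) / (2 * dx)  # central difference in space
    return dA_dt + dF_dx

def total_loss(A_pred, U_pred, A_obs, dx, dt, lam=1.0):
    """Data misfit plus weighted physics residual, PINN-style."""
    data_term = np.mean((A_pred - A_obs) ** 2)
    phys_term = np.mean(physics_residual(A_pred, U_pred, dx, dt) ** 2)
    return data_term + lam * phys_term

# A spatially uniform, steady state (constant A and U) satisfies the PDE
# exactly, so its physics residual vanishes and the loss is zero.
nt, nx, dx, dt = 4, 6, 0.1, 0.01
A = np.ones((nt, nx))
U = np.full((nt, nx), 0.5)
res = physics_residual(A, U, dx, dt)
loss = total_loss(A, U, A, dx, dt)
```

In the paper this residual is evaluated on the network's predicted waveforms and minimized jointly with the measurement misfit, so the trained model respects the governing fluid dynamics.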
{"title":"Physics-Informed Graph Neural Networks to solve 1-D equations of blood flow","authors":"Ahmet Sen , Elnaz Ghajar-Rahimi , Miquel Aguirre , Laurent Navarro , Craig J. Goergen , Stephane Avril","doi":"10.1016/j.cmpb.2024.108427","DOIUrl":"10.1016/j.cmpb.2024.108427","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>Computational models of hemodynamics can contribute to optimizing surgical plans, and improve our understanding of cardiovascular diseases. Recently, machine learning methods have become essential to reduce the computational cost of these models. In this study, we propose a method that integrates 1-D blood flow equations with Physics-Informed Graph Neural Networks (PIGNNs) to estimate the propagation of blood flow velocity and lumen area pulse waves along arteries.</div></div><div><h3>Methods:</h3><div>Our methodology involves the creation of a graph based on arterial topology, where each 1-D line represents edges and nodes in the blood flow analysis. The innovation lies in decoding the mathematical data connecting the nodes, where each node has velocity and lumen area pulse waveform outputs. The training protocol for PIGNNs involves measurement data, specifically velocity waves measured from inlet and outlet vessels and diastolic lumen area measurements from each vessel. To optimize the learning process, our approach incorporates fundamental physical principles directly into the loss function. This comprehensive training strategy not only harnesses the power of machine learning but also ensures that PIGNNs respect fundamental laws governing fluid dynamics.</div></div><div><h3>Results:</h3><div>The accuracy was validated <em>in silico</em> with different arterial networks, where PIGNNs achieved a coefficient of determination (<span><math><msup><mrow><mi>R</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span>) consistently above 0.99, comparable to numerical methods like the discontinuous Galerkin scheme. 
Moreover, with <em>in vivo</em> data, the prediction reached <span><math><msup><mrow><mi>R</mi></mrow><mrow><mn>2</mn></mrow></msup></math></span> values greater than 0.80, demonstrating the method’s effectiveness in predicting flow and lumen dynamics using minimal data.</div></div><div><h3>Conclusions:</h3><div>This study showcased the ability to calculate lumen area and blood flow rate in blood vessels within a given topology by seamlessly integrating 1-D blood flow with PIGNNs, using only blood flow velocity measurements. Moreover, this study is the first to compare the PIGNNs method with other classic Physics-Informed Neural Network (PINNs) approaches for blood flow simulation. Our findings highlight the potential to use this cost-effective and proficient tool to estimate real-time arterial pulse waves.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"257 ","pages":"Article 108427"},"PeriodicalIF":4.9,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142319229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-09-19DOI: 10.1016/j.cmpb.2024.108422
Benjamin Murray , Richard Brown , Pengcheng Ma , Eric Kerfoot , Daguang Xu , Andrew Feng , Jorge Cardoso , Sebastien Ourselin , Marc Modat
Background and Objective:
Preprocessing of data is a vital step for almost all deep learning workflows. In computer vision, manipulation of data intensity and spatial properties can improve network stability and can provide an important source of generalisation for deep neural networks. Models are frequently trained with preprocessing pipelines composed of many stages, but these pipelines come with a drawback: each stage that resamples the data costs time, degrades image quality, and adds bias to the output. Long pipelines can also be complex to design, especially in medical imaging, where cropping data early can cause significant artifacts.
Methods:
We present Lazy Resampling, a software framework that rephrases spatial preprocessing operations as a graphics pipeline. Rather than each transform individually modifying the data, the transforms generate transform descriptions that are composited together into a single resample operation wherever possible. This reduces pipeline execution time and, most importantly, limits signal degradation. It enables simpler pipeline design as crops and other operations become non-destructive. Lazy Resampling is designed in such a way that it provides the maximum benefit to users without requiring them to understand the underlying concepts or change the way that they build pipelines.
Results:
We evaluate Lazy Resampling by comparing traditional pipelines with the corresponding lazy pipelines on Medical Segmentation Decathlon datasets, across the following tasks. We demonstrate lower information loss in lazy pipelines than in traditional pipelines. We demonstrate that Lazy Resampling can avoid the catastrophic loss of semantic segmentation label accuracy that occurs in traditional pipelines when labels are passed through a pipeline and then back through the inverted pipeline. Finally, we demonstrate statistically significant improvements when training UNets for semantic segmentation.
Conclusion:
Lazy Resampling reduces the loss of information that occurs when running processing pipelines that traditionally have multiple resampling steps and enables researchers to build simpler pipelines by making operations such as rotation and cropping effectively non-destructive. It makes it possible to invert labels back through a pipeline without catastrophic loss of accuracy.
A reference implementation for Lazy Resampling can be found at https://github.com/KCL-BMEIS/LazyResampling. Lazy Resampling is being implemented as a core feature in MONAI, an open-source, Python-based deep learning library for medical imaging, with a roadmap for a full integration.
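The core idea above, compositing transform descriptions into a single resample, can be pictured with homogeneous transform matrices. This is an illustrative sketch, not MONAI's actual API; the function names and values are invented:

```python
import numpy as np

# Sketch of lazy composition: instead of resampling the image after every
# spatial transform, accumulate each transform's homogeneous 3x3 matrix and
# resample once with the composite, so the data is interpolated only once.

def rotation(theta):
    """2-D rotation as a homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def translation(tx, ty):
    """2-D translation as a homogeneous matrix."""
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

def compose(*mats):
    """Fold a pipeline of transform descriptions into one matrix
    (the first transform listed is applied first)."""
    out = np.eye(3)
    for m in mats:
        out = m @ out
    return out

# One composite resample agrees geometrically with applying the transforms
# step by step, but only one interpolation of the image would be needed.
p = np.array([1.0, 0.0, 1.0])                       # homogeneous 2-D point
step_by_step = translation(2.0, 3.0) @ (rotation(np.pi / 2) @ p)
lazy = compose(rotation(np.pi / 2), translation(2.0, 3.0)) @ p
```

Because the geometry is identical either way, the benefit of the lazy form is purely in execution time and in avoiding repeated interpolation losses.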
{"title":"Lazy Resampling: Fast and information preserving preprocessing for deep learning","authors":"Benjamin Murray , Richard Brown , Pengcheng Ma , Eric Kerfoot , Daguang Xu , Andrew Feng , Jorge Cardoso , Sebastien Ourselin , Marc Modat","doi":"10.1016/j.cmpb.2024.108422","DOIUrl":"10.1016/j.cmpb.2024.108422","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>Preprocessing of data is a vital step for almost all deep learning workflows. In computer vision, manipulation of data intensity and spatial properties can improve network stability and can provide an important source of generalisation for deep neural networks. Models are frequently trained with preprocessing pipelines composed of many stages, but these pipelines come with a drawback; each stage that resamples the data costs time, degrades image quality, and adds bias to the output. Long pipelines can also be complex to design, especially in medical imaging, where cropping data early can cause significant artifacts.</div></div><div><h3>Methods:</h3><div>We present Lazy Resampling, a software that rephrases spatial preprocessing operations as a graphics pipeline. Rather than each transform individually modifying the data, the transforms generate transform descriptions that are composited together into a single resample operation wherever possible. This reduces pipeline execution time and, most importantly, limits signal degradation. It enables simpler pipeline design as crops and other operations become non-destructive. Lazy Resampling is designed in such a way that it provides the maximum benefit to users without requiring them to understand the underlying concepts or change the way that they build pipelines.</div></div><div><h3>Results:</h3><div>We evaluate Lazy Resampling by comparing traditional pipelines and the corresponding lazy resampling pipeline for the following tasks on Medical Segmentation Decathlon datasets. We demonstrate lower information loss in lazy pipelines vs. 
traditional pipelines. We demonstrate that Lazy Resampling can avoid catastrophic loss of semantic segmentation label accuracy occurring in traditional pipelines when passing labels through a pipeline and then back through the inverted pipeline. Finally, we demonstrate statistically significant improvements when training UNets for semantic segmentation.</div></div><div><h3>Conclusion:</h3><div>Lazy Resampling reduces the loss of information that occurs when running processing pipelines that traditionally have multiple resampling steps and enables researchers to build simpler pipelines by making operations such as rotation and cropping effectively non-destructive. It makes it possible to invert labels back through a pipeline without catastrophic loss of accuracy.</div><div>A reference implementation for Lazy Resampling can be found at <span><span>https://github.com/KCL-BMEIS/LazyResampling</span><svg><path></path></svg></span>. Lazy Resampling is being implemented as a core feature in MONAI, an open source python-based deep learning library for medical imaging, with a roadmap for a full integration.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"257 ","pages":"Article 108422"},"PeriodicalIF":4.9,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142421046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-09-19DOI: 10.1016/j.cmpb.2024.108434
Haowen Zhao , Xu Zhang , Xiang Chen , Ping Zhou
Background and objective
Electrode shift is one of the critical factors that compromise the performance of myoelectric pattern recognition (MPR) based on surface electromyogram (SEMG). Current studies have focused on global features of SEMG signals to mitigate this issue, but such features are an oversimplified description of human movements that does not incorporate microscopic neural drive information. The objective of this work is to develop a novel method for calibrating electrode array shifts toward achieving robust MPR, leveraging individual motor unit (MU) activities obtained through advanced SEMG decomposition.
Methods
All of the MUs from decomposition of SEMG data recorded at the original electrode array position were first used to train a neural network for pattern recognition. A subset of the MUs decomposed after a shift could be tracked and paired with MUs obtained at the original position based on the spatial distributions of their motor unit action potential (MUAP) waveforms, so as to determine the shift vector (describing both the orientation and distance of the shift) implied consistently by these multiple MU pairs. Given the known shift vector, the features of the after-shift decomposed MUs were corrected accordingly and then fed into the network to finalize the MPR task. The performance of the proposed method was evaluated with data recorded by a 16 × 8 electrode array placed over the finger extensor muscles of 8 subjects performing 10 finger movement patterns.
Results
The proposed method achieved a shift detection accuracy of 100 % and a pattern recognition accuracy approaching 100 %, significantly outperforming conventional methods, which showed lower accuracies on both measures (p < 0.05).
Conclusions
Our method demonstrated the feasibility of using decomposed MUAP waveforms’ spatial distributions to calibrate electrode shift. This study provides a new tool to enhance the robustness of myoelectric control systems via microscopic neural drive information at an individual MU level.
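One way to picture the shift-vector step described in the Methods (purely illustrative; the paper derives the shift from MUAP spatial distributions, while the centre coordinates and the median estimator here are assumptions):

```python
import numpy as np

# Hedged sketch: given MUs tracked and paired across electrode placements,
# estimate a single shift vector (orientation and distance) consistent with
# the displacements of the paired MUAP spatial centres. Data are synthetic.

def estimate_shift(centres_before, centres_after):
    """Median displacement across MU pairs; the median is robust to the
    occasional mismatched pair."""
    disp = np.asarray(centres_after) - np.asarray(centres_before)
    shift = np.median(disp, axis=0)
    distance = float(np.linalg.norm(shift))
    angle = float(np.degrees(np.arctan2(shift[1], shift[0])))
    return shift, distance, angle

# Three MU pairs, all displaced by one electrode column (x) and two rows (y).
before = [(2.0, 3.0), (5.0, 1.0), (7.0, 6.0)]
after = [(3.0, 5.0), (6.0, 3.0), (8.0, 8.0)]
shift, distance, angle = estimate_shift(before, after)
```

Once such a shift vector is known, features of the after-shift MUs can be mapped back to the original electrode coordinates before classification, which is the correction step the abstract describes.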
{"title":"A robust myoelectric pattern recognition framework based on individual motor unit activities against electrode array shifts","authors":"Haowen Zhao , Xu Zhang , Xiang Chen , Ping Zhou","doi":"10.1016/j.cmpb.2024.108434","DOIUrl":"10.1016/j.cmpb.2024.108434","url":null,"abstract":"<div><h3>Background and objective</h3><div>Electrode shift is always one of the critical factors to compromise the performance of myoelectric pattern recognition (MPR) based on surface electromyogram (SEMG). However, current studies focused on the global features of SEMG signals to mitigate this issue but it is just an oversimplified description of the human movements without incorporating microscopic neural drive information. The objective of this work is to develop a novel method for calibrating the electrode array shifts toward achieving robust MPR, leveraging individual motor unit (MU) activities obtained through advanced SEMG decomposition.</div></div><div><h3>Methods</h3><div>All of the MUs from decomposition of SEMG data recorded at the original electrode array position were first initialized to train a neural network for pattern recognition. A part of decomposed MUs could be tracked and paired with MUs obtained at the original position based on spatial distribution of their MUAP waveforms, so as to determine the shift vector (describing both the orientation and distance of the shift) implicated consistently by these multiple MU pairs. Given the known shift vector, the features of the after-shift decomposed MUs were corrected accordingly and then fed into the network to finalize the MPR task. 
The performance of the proposed method was evaluated with data recorded by a 16 × 8 electrode array placed over the finger extensor muscles of 8 subjects performing 10 finger movement patterns.</div></div><div><h3>Results</h3><div>The proposed method achieved a shift detection accuracy of 100 % and a pattern recognition accuracy approximating to 100 %, significantly outperforming the conventional methods with lower shift detection accuracies and lower pattern recognition accuracies (<em>p</em> < 0.05).</div></div><div><h3>Conclusions</h3><div>Our method demonstrated the feasibility of using decomposed MUAP waveforms’ spatial distributions to calibrate electrode shift. This study provides a new tool to enhance the robustness of myoelectric control systems via microscopic neural drive information at an individual MU level.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"257 ","pages":"Article 108434"},"PeriodicalIF":4.9,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142326264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-09-19DOI: 10.1016/j.cmpb.2024.108425
Xiaoyan Wang , Hongzhi Qi
Background and objective
Motor imagery (MI) recognition is one of the most critical decoding problems in the brain-computer interface field. Combined with steady-state somatosensory evoked potentials (the MI-SSSEP paradigm), this new paradigm can achieve higher recognition accuracy than the traditional MI paradigm. Typical algorithms do not fully consider the characteristics of MI-SSSEP signals. Developing an algorithm that fully captures the paradigm's characteristics to reduce the false triggering rate is the next step in improving performance.
Methods
This paper proposes using a complex-signal task-related component analysis (cTRCA) algorithm for spatial filtering, motivated by the features of the SSSEP signal. Analysis of simulated signals shows that task-related component analysis (TRCA), as a typical method, is affected when the responses between stimuli have reduced correlation, and that the proposed algorithm can effectively overcome this problem. Experimental data under the MI-SSSEP paradigm were used to identify right-handed target tasks, and three unique interference tasks were used to test the false triggering rate. cTRCA demonstrated superior performance, as confirmed by the Wilcoxon signed-rank test.
Results
The recognition algorithm combining cTRCA with mutual information-based best individual feature (MIBIF) selection and minimum distance to mean (MDM) classification obtained an AUC value of up to 0.89, much higher than the traditional common spatial pattern (CSP) algorithm combined with a support vector machine (SVM) (average AUC 0.77, p < 0.05). Compared to CSP+SVM, this model reduced the false triggering rate from 38.69 % to 20.74 % (p < 0.001).
Conclusions
The research shows that TRCA is influenced by the characteristics of MI-SSSEP signals. The results further show that the motor imagery task in the new MI-SSSEP paradigm causes a phase change in the evoked potential, and that the cTRCA algorithm based on this phase change is more suitable for the hybrid paradigm, more conducive to decoding the motor imagery task, and reduces the false triggering rate.
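For context, standard real-valued TRCA computes a spatial filter from a generalized eigenvalue problem that maximizes inter-trial reproducibility; the paper's cTRCA extends this to complex-valued signals so that phase information is captured. A minimal sketch of the real-valued version on synthetic two-channel data (the data, channel count, and seed are all assumptions for illustration):

```python
import numpy as np

# Sketch of standard (real-valued) TRCA. Channel 0 carries a task-related
# 7 Hz component repeated across trials; channel 1 is pure noise, so the
# learned spatial filter should weight channel 0 far more heavily.

def trca_filter(trials):
    """trials: [n_trials, n_channels, n_samples] -> spatial filter w that
    maximizes inter-trial (task-related) covariance."""
    n_trials, n_ch, _ = trials.shape
    centred = trials - trials.mean(axis=2, keepdims=True)
    # S: covariance summed over all distinct ordered trial pairs
    S = np.zeros((n_ch, n_ch))
    for i in range(n_trials):
        for j in range(n_trials):
            if i != j:
                S += centred[i] @ centred[j].T
    # Q: covariance of the trials concatenated in time
    X = np.concatenate(list(centred), axis=1)
    Q = X @ X.T
    # Leading eigenvector of Q^{-1} S gives the filter
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Q, S))
    return np.real(eigvecs[:, np.argmax(np.real(eigvals))])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
common = np.sin(2 * np.pi * 7 * t)  # reproducible task-related component
trials = np.stack([
    np.vstack([common + 0.05 * rng.standard_normal(200),  # channel 0: signal
               rng.standard_normal(200)])                 # channel 1: noise
    for _ in range(6)
])
w = trca_filter(trials)
```

cTRCA replaces the real covariances above with their complex (Hermitian) counterparts on an analytic-signal representation, which is what lets it exploit the phase change the authors attribute to the motor imagery task.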
{"title":"Decoding motor imagery loaded on steady-state somatosensory evoked potential based on complex task-related component analysis","authors":"Xiaoyan Wang , Hongzhi Qi","doi":"10.1016/j.cmpb.2024.108425","DOIUrl":"10.1016/j.cmpb.2024.108425","url":null,"abstract":"<div><h3>Background and objective</h3><div>Motor Imagery (MI) recognition is one of the most critical decoding problems in brain- computer interface field. Combined with the steady-state somatosensory evoked potential (MI-SSSEP), this new paradigm can achieve higher recognition accuracy than the traditional MI paradigm. Typical algorithms do not fully consider the characteristics of MI-SSSEP signals. Developing an algorithm that fully captures the paradigm's characteristics to reduce false triggering rate is the new step in improving performance.</div></div><div><h3>Methods</h3><div>The idea to use complex signal task-related component analysis (cTRCA) algorithm for spatial filtering processing has been proposed in this paper according to the features of SSSEP signal. In this research, it's proved from the analysis of simulation signals that task-related component analysis (TRCA) as typical method is affected when the response between stimuli has reduced correlation and the proposed algorithm can effectively overcome this problem. The experimental data under the MI-SSSEP paradigm have been used to identify right-handed target tasks and three unique interference tasks are used to test the false triggering rate. cTRCA demonstrates superior performance as confirmed by the Wilcoxon signed-rank test.</div></div><div><h3>Results</h3><div>The recognition algorithm of cTRCA combined with mutual information-based best individual feature (MIBIF) and minimum distance to mean (MDM) can obtain AUC value up to 0.89, which is much higher than traditional algorithm common spatial pattern (CSP) combined with support vector machine (SVM) (the average AUC value is 0.77, <em>p</em> < 0.05). 
Compared to CSP+SVM, this algorithm model reduced the false triggering rate from 38.69 % to 20.74 % (<em>p</em> < 0.001).</div></div><div><h3>Conclusions</h3><div>The research prove that TRCA is influenced by MI-SSSEP signals. The results further prove that the motor imagery task in the new paradigm MI-SSSEP causes the phase change in evoked potential. and the cTRCA algorithm based on such phase change is more suitable for this hybrid paradigm and more conducive to decoding the motor imagery task and reducing false triggering rate.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"257 ","pages":"Article 108425"},"PeriodicalIF":4.9,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142315333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}