Rethinking the residual approach: leveraging statistical learning to operationalize cognitive resilience in Alzheimer's disease
Pub Date: 2025-01-27 | DOI: 10.1186/s40708-024-00249-4 | Brain Informatics 12(1): 3
Colin Birkenbihl, Madison Cuppels, Rory T Boyle, Hannah M Klinger, Oliver Langford, Gillian T Coughlan, Michael J Properzi, Jasmeer Chhatwal, Julie C Price, Aaron P Schultz, Dorene M Rentz, Rebecca E Amariglio, Keith A Johnson, Rebecca F Gottesman, Shubhabrata Mukherjee, Paul Maruff, Yen Ying Lim, Colin L Masters, Alexa Beiser, Susan M Resnick, Timothy M Hughes, Samantha Burnham, Ilke Tunali, Susan Landau, Ann D Cohen, Sterling C Johnson, Tobey J Betthauser, Sudha Seshadri, Samuel N Lockhart, Sid E O'Bryant, Prashanthi Vemuri, Reisa A Sperling, Timothy J Hohman, Michael C Donohue, Rachel F Buckley
Cognitive resilience (CR) describes the phenomenon of individuals evading cognitive decline despite prominent Alzheimer's disease neuropathology. Operationalizing and measuring this latent construct are non-trivial because it cannot be directly observed. The residual approach has been widely applied to estimate CR, taking an individual's degree of resilience to be the residual of a linear model. We demonstrate that this approach makes specific, uncontrollable assumptions and likely leads to biased and erroneous resilience estimates. This is especially true when information about CR is contained in the data the linear model was fitted to, either through the inclusion of CR-associated variables or through correlation. We propose an alternative strategy that overcomes the standard approach's limitations using machine learning principles. Our proposed approach makes fewer assumptions about the data and CR and achieves better estimation accuracy on simulated ground-truth data.
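To make the critique concrete, here is a minimal sketch of the standard residual approach on simulated data; the toy generative model, variable names, and effect sizes are illustrative assumptions, not taken from the study.

```python
# A minimal sketch of the standard residual approach to estimating cognitive
# resilience (CR), the method the abstract critiques. The generative model,
# variable names, and effect sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
amyloid = rng.normal(size=n)            # neuropathology burden (standardized), assumed
age = rng.normal(size=n)                # demographic covariate, assumed
true_cr = rng.normal(size=n)            # latent resilience: unobservable in real data
cognition = -0.6 * amyloid - 0.3 * age + 0.5 * true_cr + rng.normal(scale=0.3, size=n)

# Residual approach: regress cognition on pathology and covariates;
# each participant's residual is taken as their CR estimate.
X = np.column_stack([amyloid, age])
model = LinearRegression().fit(X, cognition)
cr_residual = cognition - model.predict(X)

# With simulated ground truth available, we can check how well the residual
# recovers the latent construct in this toy setting.
print("corr(residual, true CR):", np.corrcoef(cr_residual, true_cr)[0, 1])
```

Because the residual absorbs everything the linear model cannot explain, measurement noise and any CR-related signal already present in the predictors end up in the estimate, which is the bias the abstract describes.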
{"title":"Rethinking the residual approach: leveraging statistical learning to operationalize cognitive resilience in Alzheimer's disease.","authors":"Colin Birkenbihl, Madison Cuppels, Rory T Boyle, Hannah M Klinger, Oliver Langford, Gillian T Coughlan, Michael J Properzi, Jasmeer Chhatwal, Julie C Price, Aaron P Schultz, Dorene M Rentz, Rebecca E Amariglio, Keith A Johnson, Rebecca F Gottesman, Shubhabrata Mukherjee, Paul Maruff, Yen Ying Lim, Colin L Masters, Alexa Beiser, Susan M Resnick, Timothy M Hughes, Samantha Burnham, Ilke Tunali, Susan Landau, Ann D Cohen, Sterling C Johnson, Tobey J Betthauser, Sudha Seshadri, Samuel N Lockhart, Sid E O'Bryant, Prashanthi Vemuri, Reisa A Sperling, Timothy J Hohman, Michael C Donohue, Rachel F Buckley","doi":"10.1186/s40708-024-00249-4","DOIUrl":"10.1186/s40708-024-00249-4","url":null,"abstract":"<p><p>Cognitive resilience (CR) describes the phenomenon of individuals evading cognitive decline despite prominent Alzheimer's disease neuropathology. Operationalization and measurement of this latent construct is non-trivial as it cannot be directly observed. The residual approach has been widely applied to estimate CR, where the degree of resilience is estimated through a linear model's residuals. We demonstrate that this approach makes specific, uncontrollable assumptions and likely leads to biased and erroneous resilience estimates. This is especially true when information about CR is contained in the data the linear model was fitted to, either through inclusion of CR-associated variables or due to correlation. We propose an alternative strategy which overcomes the standard approach's limitations using machine learning principles. Our proposed approach makes fewer assumptions about the data and CR and achieves better estimation accuracy on simulated ground-truth data.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"12 1","pages":"3"},"PeriodicalIF":0.0,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11772644/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143053883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CalciumZero: a toolbox for fluorescence calcium imaging on iPSC derived brain organoids
Pub Date: 2025-01-20 | DOI: 10.1186/s40708-024-00248-5 | Brain Informatics 12(1): 2
Xiaofu He, Yian Wang, Yutong Gao, Xuchen Wang, Zhixiong Sun, Huixiang Zhu, Kam W Leong, Bin Xu
Calcium plays an important role in regulating various neuronal activities in the human brain. Investigating the dynamics of calcium levels in neurons is essential not just for understanding the pathophysiology of neuropsychiatric disorders but also as a quantitative gauge of the influence of drugs on neuronal activity. Accessing human brain tissue to study neuron activities has historically been challenging due to ethical concerns. However, a significant breakthrough has emerged with the use of patient-derived human induced pluripotent stem cells (iPSCs) to culture neurons and develop brain organoids, providing a promising modeling system that overcomes these critical obstacles. Many robust tools have been developed for calcium activity analysis, but most are designed for calcium signal detection only. There are limited choices for in-depth downstream applications, particularly for discerning differences between patient and normal calcium dynamics, and their responses to drug treatment, in human iPSC-based models. Moreover, end-user researchers usually face a considerable challenge in mastering the entire analysis procedure and obtaining critical outputs due to the steep learning curve associated with the available tools. We therefore developed CalciumZero, a user-friendly toolbox that addresses these unmet needs in calcium activity studies of human iPSC-based 3D organoid/neurosphere models. CalciumZero includes a graphical user interface (GUI) that provides end users with intuitive visualization and smooth parameter tuning. It streamlines the entire analysis process, offering full automation with a single click after parameter optimization. In addition, it includes supplementary features for statistically evaluating the impact of disease etiology and detecting drug candidate effects on calcium activity. These evaluations enhance the analysis of imaging data obtained from patient iPSC-derived brain organoid/neurosphere models, providing a more comprehensive understanding of the results.
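As a point of reference for the kind of analysis such toolboxes automate, the following is a minimal sketch of a ΔF/F0 computation and threshold-based transient detection on a synthetic fluorescence trace; it is a generic illustration, not CalciumZero's API, and the sampling rate, trace, and thresholds are assumptions.

```python
# A minimal sketch of a typical calcium-activity analysis step: compute a ΔF/F0
# trace and detect transients by thresholding. This is a generic illustration,
# not CalciumZero's API; the sampling rate, synthetic trace, and thresholds are
# assumptions.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)
fs = 10.0                                   # imaging rate (Hz), assumed
t = np.arange(0, 120, 1 / fs)               # a 2-minute recording
trace = 100.0 + rng.normal(scale=2.0, size=t.size)
for onset in (150, 400, 800):               # inject three synthetic calcium transients
    trace[onset:onset + 60] += 30.0 * np.exp(-np.arange(60) / 15.0)

f0 = np.percentile(trace, 10)               # low-percentile estimate of baseline fluorescence
dff = (trace - f0) / f0                     # ΔF/F0

# Simple event detection: peaks exceeding 3 SD of a quiet stretch of the trace.
threshold = 3 * np.std(dff[:100])
peaks, _ = find_peaks(dff, height=threshold, distance=int(2 * fs))
print(f"Detected {peaks.size} transients at t = {t[peaks]} s")
```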
{"title":"CalciumZero: a toolbox for fluorescence calcium imaging on iPSC derived brain organoids.","authors":"Xiaofu He, Yian Wang, Yutong Gao, Xuchen Wang, Zhixiong Sun, Huixiang Zhu, Kam W Leong, Bin Xu","doi":"10.1186/s40708-024-00248-5","DOIUrl":"10.1186/s40708-024-00248-5","url":null,"abstract":"<p><p>Calcium plays an important role in regulating various neuronal activities in human brains. Investigating the dynamics of the calcium level in neurons is essential not just for understanding the pathophysiology of neuropsychiatric disorders but also as a quantitative gauge to evaluate the influence of drugs on neuron activities. Accessing human brain tissue to study neuron activities has historically been challenging due to ethical concerns. However, a significant breakthrough in the field has emerged with the advent of utilizing patient-derived human induced pluripotent stem cells (iPSCs) to culture neurons and develop brain organoids. This innovative approach provides a promising modeling system to overcome these critical obstacles. Many robust calcium imaging analysis tools have been developed for calcium activity analysis. However, most of the tools are designed for calcium signal detection only. There are limited choices for in-depth downstream applications, particularly in discerning differences between patient and normal calcium dynamics and their responses to drug treatment obtained from human iPSC-based models. Moreover, end-user researchers usually face a considerable challenge in mastering the entire analysis procedure and obtaining critical outputs due to the steep learning curve associated with these available tools. Therefore, we developed CalciumZero, a user-friendly toolbox to satisfy the unmet needs in calcium activity studies in human iPSC-based 3D-organoid/neurosphere models. CalciumZero includes a graphical user interface (GUI), which provides end-user iconic visualization and smooth adjustments on parameter tuning. It streamlines the entire analysis process, offering full automation with just one click after parameter optimization. In addition, it includes supplementary features to statistically evaluate the impact on disease etiology and the detection of drug candidate effects on calcium activities. These evaluations will enhance the analysis of imaging data obtained from patient iPSC-derived brain organoid/neurosphere models, providing a more comprehensive understanding of the results.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"12 1","pages":"2"},"PeriodicalIF":0.0,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11746984/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blockchain-enabled digital twin system for brain stroke prediction
Pub Date: 2025-01-14 | DOI: 10.1186/s40708-024-00247-6 | Brain Informatics 12(1): 1
Venkatesh Upadrista, Sajid Nazir, Huaglory Tianfield
A digital twin is a virtual model of a real-world system that updates in real time. In healthcare, digital twins are gaining popularity for monitoring activities like diet, physical activity, and sleep. However, their application in predicting serious conditions such as heart attacks, brain strokes, and cancers remains under investigation, with current research showing limited accuracy in such predictions. Moreover, concerns around data security and privacy continue to challenge the widespread adoption of these models. To address these challenges, we developed a secure, machine-learning-powered digital twin application with three main objectives: enhancing prediction accuracy, strengthening security, and ensuring scalability. The application achieved an accuracy of 98.28% for brain stroke prediction on the selected dataset. Data security was enhanced by integrating consortium blockchain technology with machine learning. The results show that the application is tamper-proof and capable of detecting and automatically correcting backend data anomalies to maintain robust data protection. The application can be extended to monitor other pathologies such as heart attacks, cancers, osteoporosis, and epilepsy with minimal configuration changes.
Explainable brain age prediction: a comparative evaluation of morphometric and deep learning pipelines
Pub Date: 2024-12-18 | DOI: 10.1186/s40708-024-00244-9 | Brain Informatics 11(1): 33
Maria Luigia Natalia De Bonis, Giuseppe Fasano, Angela Lombardi, Carmelo Ardito, Antonio Ferrara, Eugenio Di Sciascio, Tommaso Di Noia
Brain age, a biomarker reflecting brain health relative to chronological age, is increasingly used in neuroimaging to detect early signs of neurodegenerative diseases and support personalized treatment plans. Two primary approaches for brain age prediction have emerged: morphometric feature extraction from MRI scans and deep learning (DL) applied to raw MRI data. However, a systematic comparison of these methods regarding performance, interpretability, and clinical utility has been limited. In this study, we present a comparative evaluation of two pipelines: one using morphometric features from FreeSurfer and the other employing 3D convolutional neural networks (CNNs). Using a multisite neuroimaging dataset, we assessed both model performance and the interpretability of predictions through eXplainable Artificial Intelligence (XAI) methods, applying SHAP to the feature-based pipeline and Grad-CAM and DeepSHAP to the CNN-based pipeline. Our results show comparable performance between the two pipelines in Leave-One-Site-Out (LOSO) validation, achieving state-of-the-art performance on the independent test set (MAE = 3.21 with a DNN and morphometric features, and MAE = 3.08 with a DenseNet-121 architecture). SHAP provided the most consistent and interpretable results, while DeepSHAP exhibited greater variability. Further work is needed to assess the clinical utility of Grad-CAM. This study addresses a critical gap by systematically comparing the interpretability of multiple XAI methods across distinct brain age prediction pipelines. Our findings underscore the importance of integrating XAI into clinical practice, offering insights into how XAI outputs vary and their potential utility for clinicians.
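As an illustration of the feature-based pipeline idea, the sketch below predicts age from synthetic stand-ins for morphometric features, reports MAE, and inspects feature importance; scikit-learn's permutation importance is used here in place of the SHAP attribution the study applied, and all data are simulated assumptions.

```python
# A minimal sketch of a feature-based brain-age pipeline: regress age on
# morphometric-style features, report MAE, and rank feature importance.
# Synthetic features stand in for FreeSurfer outputs, and permutation
# importance replaces the SHAP attribution used in the study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, p = 600, 20
X = rng.normal(size=(n, p))                       # stand-ins for cortical thickness, volumes, ...
age = 50 + 10 * X[:, 0] - 6 * X[:, 1] + rng.normal(scale=3, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, age, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("MAE:", mean_absolute_error(y_te, pred))    # analogous to the reported MAE metric
print("Brain-age gap (first 5):", (pred - y_te)[:5])

imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print("Top features:", np.argsort(imp.importances_mean)[::-1][:3])
```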
{"title":"Explainable brain age prediction: a comparative evaluation of morphometric and deep learning pipelines.","authors":"Maria Luigia Natalia De Bonis, Giuseppe Fasano, Angela Lombardi, Carmelo Ardito, Antonio Ferrara, Eugenio Di Sciascio, Tommaso Di Noia","doi":"10.1186/s40708-024-00244-9","DOIUrl":"10.1186/s40708-024-00244-9","url":null,"abstract":"<p><p>Brain age, a biomarker reflecting brain health relative to chronological age, is increasingly used in neuroimaging to detect early signs of neurodegenerative diseases and support personalized treatment plans. Two primary approaches for brain age prediction have emerged: morphometric feature extraction from MRI scans and deep learning (DL) applied to raw MRI data. However, a systematic comparison of these methods regarding performance, interpretability, and clinical utility has been limited. In this study, we present a comparative evaluation of two pipelines: one using morphometric features from FreeSurfer and the other employing 3D convolutional neural networks (CNNs). Using a multisite neuroimaging dataset, we assessed both model performance and the interpretability of predictions through eXplainable Artificial Intelligence (XAI) methods, applying SHAP to the feature-based pipeline and Grad-CAM and DeepSHAP to the CNN-based pipeline. Our results show comparable performance between the two pipelines in Leave-One-Site-Out (LOSO) validation, achieving state-of-the-art performance on the independent test set ( <math><mrow><mi>M</mi> <mi>A</mi> <mi>E</mi> <mo>=</mo> <mn>3.21</mn></mrow> </math> with DNN and morphometric features and <math><mrow><mi>M</mi> <mi>A</mi> <mi>E</mi> <mo>=</mo> <mn>3.08</mn></mrow> </math> with a DenseNet-121 architecture). SHAP provided the most consistent and interpretable results, while DeepSHAP exhibited greater variability. Further work is needed to assess the clinical utility of Grad-CAM. This study addresses a critical gap by systematically comparing the interpretability of multiple XAI methods across distinct brain age prediction pipelines. Our findings underscore the importance of integrating XAI into clinical practice, offering insights into how XAI outputs vary and their potential utility for clinicians.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"33"},"PeriodicalIF":0.0,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11655902/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142847751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-throughput mesoscopic optical imaging data processing and parsing using differential-guided filtered neural networks
Pub Date: 2024-12-18 | DOI: 10.1186/s40708-024-00246-7 | Brain Informatics 11(1): 32
Hong Zhang, Zhikang Lu, Peicong Gong, Shilong Zhang, Xiaoquan Yang, Xiangning Li, Zhao Feng, Anan Li, Chi Xiao
High-throughput mesoscopic optical imaging technology has tremendously boosted the efficiency of procuring massive mesoscopic datasets from mouse brains. Constrained by the imaging field of view, the image strips obtained by such technologies typically require further processing, such as cross-sectional stitching, artifact removal, and signal area cropping, to meet the requirements of subsequent analysis. However, a batch of raw array mouse brain data at a resolution of 0.65 × 0.65 × 3 μm³ can reach 220 TB, and the cropping of outer contour areas in this disjointed processing still relies on manual visual observation, which consumes substantial computational resources and labor. In this paper, we design an efficient deep differential guided filtering module (DDGF) by fusing multi-scale iterative differential guided filtering with deep learning, which effectively refines image details while mitigating background noise. By combining DDGF with a deep learning network, we propose a lightweight deep differential guided filtering segmentation network (DDGF-SegNet), which demonstrates robust performance on our dataset, achieving a Dice coefficient of 0.92, precision of 0.98, recall of 0.91, and a Jaccard index of 0.86. Building on the segmentation, we use connectivity analysis to ascertain the three-dimensional spatial orientation of each brain within the array. Furthermore, we streamline the entire workflow with an automated pipeline optimized for cluster-based message passing interface (MPI) parallel computation, which reduces the processing time for a mouse brain dataset to a mere 1.1 h, improving manual efficiency by 25 times and overall data processing efficiency by 2.4 times, and paving the way for more efficient big data processing and parsing in high-throughput mesoscopic optical imaging.
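For readers unfamiliar with the underlying operation, the following is a minimal NumPy/SciPy sketch of the classic guided image filter that differential guided filtering builds on; it is not the paper's DDGF module, and the radius, regularization, and toy image are assumptions.

```python
# A minimal sketch of the classic guided image filter that differential guided
# filtering builds on: edge-preserving smoothing via local linear models.
# This is a generic NumPy/SciPy illustration, not the paper's DDGF module;
# the radius, regularization, and toy image are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide: np.ndarray, src: np.ndarray, radius: int = 4, eps: float = 0.1) -> np.ndarray:
    box = lambda x: uniform_filter(x, size=2 * radius + 1)   # local box mean
    mean_i, mean_p = box(guide), box(src)
    var_i = box(guide * guide) - mean_i ** 2
    cov_ip = box(guide * src) - mean_i * mean_p
    a = cov_ip / (var_i + eps)                               # local linear coefficients
    b = mean_p - a * mean_i
    return box(a) * guide + box(b)                           # smoothed output, edges preserved

# Toy example: denoise a synthetic image strip containing a sharp tissue-like boundary.
rng = np.random.default_rng(3)
img = np.zeros((128, 128))
img[:, 64:] = 1.0
noisy = img + rng.normal(scale=0.2, size=img.shape)
smoothed = guided_filter(noisy, noisy)                       # self-guided filtering

print("MAE noisy:   ", float(np.abs(noisy - img).mean()))
print("MAE filtered:", float(np.abs(smoothed - img).mean()))  # lower: noise suppressed
```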
{"title":"High-throughput mesoscopic optical imaging data processing and parsing using differential-guided filtered neural networks.","authors":"Hong Zhang, Zhikang Lu, Peicong Gong, Shilong Zhang, Xiaoquan Yang, Xiangning Li, Zhao Feng, Anan Li, Chi Xiao","doi":"10.1186/s40708-024-00246-7","DOIUrl":"10.1186/s40708-024-00246-7","url":null,"abstract":"<p><p>High-throughput mesoscopic optical imaging technology has tremendously boosted the efficiency of procuring massive mesoscopic datasets from mouse brains. Constrained by the imaging field of view, the image strips obtained by such technologies typically require further processing, such as cross-sectional stitching, artifact removal, and signal area cropping, to meet the requirements of subsequent analyse. However, obtaining a batch of raw array mouse brain data at a resolution of <math><mrow><mn>0.65</mn> <mo>×</mo> <mn>0.65</mn> <mo>×</mo> <mn>3</mn> <mspace></mspace> <mi>μ</mi> <msup><mtext>m</mtext> <mn>3</mn></msup> </mrow> </math> can reach 220TB, and the cropping of the outer contour areas in the disjointed processing still relies on manual visual observation, which consumes substantial computational resources and labor costs. In this paper, we design an efficient deep differential guided filtering module (DDGF) by fusing multi-scale iterative differential guided filtering with deep learning, which effectively refines image details while mitigating background noise. Subsequently, by amalgamating DDGF with deep learning network, we propose a lightweight deep differential guided filtering segmentation network (DDGF-SegNet), which demonstrates robust performance on our dataset, achieving Dice of 0.92, Precision of 0.98, Recall of 0.91, and Jaccard index of 0.86. Building on the segmentation, we utilize connectivity analysis for ascertaining three-dimensional spatial orientation of each brain within the array. Furthermore, we streamline the entire processing workflow by developing an automated pipeline optimized for cluster-based message passing interface(MPI) parallel computation, which reduces the processing time for a mouse brain dataset to a mere 1.1 h, enhancing manual efficiency by 25 times and overall data processing efficiency by 2.4 times, paving the way for enhancing the efficiency of big data processing and parsing for high-throughput mesoscopic optical imaging techniques.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"32"},"PeriodicalIF":0.0,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11655801/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142847757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing cross-subject emotion recognition precision through unimodal EEG: a novel emotion preceptor model
Pub Date: 2024-12-18 | DOI: 10.1186/s40708-024-00245-8 | Brain Informatics 11(1): 31
Yihang Dong, Changhong Jing, Mufti Mahmud, Michael Kwok-Po Ng, Shuqiang Wang
Affective computing is a key research area in computer science, neuroscience, and psychology, aimed at enabling computers to recognize, understand, and respond to human emotional states. As the demand for affective computing technology grows, emotion recognition methods based on physiological signals have become research hotspots. Among these, electroencephalogram (EEG) signals, which reflect brain activity, are highly promising. However, individual physiological and anatomical differences introduce noise into EEG signals, reducing emotion recognition performance. Additionally, the synchronous collection of multimodal data in practical applications requires high equipment and environmental standards, limiting the practical use of EEG signals. To address these issues, this study proposes the Emotion Preceptor, a cross-subject emotion recognition model based on unimodal EEG signals. The model introduces a Static Spatial Adapter to integrate spatial information in EEG signals, reducing individual differences and extracting robust encoding information. A Temporal Causal Network then leverages temporal information to extract features beneficial for emotion recognition, achieving precise recognition from unimodal EEG signals. Extensive experiments on the SEED and SEED-V datasets demonstrate the superior performance of the Emotion Preceptor and validate the effectiveness of the new data processing method that combines DE features in a temporal sequence. Additionally, we analyzed the model's data flow and encoding methods from a biological interpretability perspective and validated them against neuroscience research on emotion generation and regulation, promoting further development of EEG-based emotion recognition research.
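As a concrete example of the per-window features the abstract mentions, here is a minimal sketch of computing differential entropy (DE) per frequency band under a Gaussian assumption; the band edges, sampling rate, and synthetic signal are illustrative assumptions.

```python
# A minimal sketch of computing differential entropy (DE) features per frequency
# band for one EEG channel and window. Band definitions, sampling rate, and the
# synthetic signal are assumptions for illustration.
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14), "beta": (14, 31), "gamma": (31, 50)}

def de_features(eeg: np.ndarray, fs: float) -> dict:
    """DE of a band-passed signal under a Gaussian assumption: 0.5*ln(2*pi*e*var)."""
    feats = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, eeg)
        feats[name] = 0.5 * np.log(2 * np.pi * np.e * np.var(band))
    return feats

fs = 200.0
t = np.arange(0, 4, 1 / fs)                      # one 4-second window for one channel
rng = np.random.default_rng(5)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)   # alpha-dominant toy signal
print(de_features(eeg, fs))                      # alpha DE should be the largest
```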
{"title":"Enhancing cross-subject emotion recognition precision through unimodal EEG: a novel emotion preceptor model.","authors":"Yihang Dong, Changhong Jing, Mufti Mahmud, Michael Kwok-Po Ng, Shuqiang Wang","doi":"10.1186/s40708-024-00245-8","DOIUrl":"10.1186/s40708-024-00245-8","url":null,"abstract":"<p><p>Affective computing is a key research area in computer science, neuroscience, and psychology, aimed at enabling computers to recognize, understand, and respond to human emotional states. As the demand for affective computing technology grows, emotion recognition methods based on physiological signals have become research hotspots. Among these, electroencephalogram (EEG) signals, which reflect brain activity, are highly promising. However, due to individual physiological and anatomical differences, EEG signals introduce noise, reducing emotion recognition performance. Additionally, the synchronous collection of multimodal data in practical applications requires high equipment and environmental standards, limiting the practical use of EEG signals. To address these issues, this study proposes the Emotion Preceptor, a cross-subject emotion recognition model based on unimodal EEG signals. This model introduces a Static Spatial Adapter to integrate spatial information in EEG signals, reducing individual differences and extracting robust encoding information. The Temporal Causal Network then leverages temporal information to extract beneficial features for emotion recognition, achieving precise recognition based on unimodal EEG signals. Extensive experiments on the SEED and SEED-V datasets demonstrate the superior performance of the Emotion Preceptor and validate the effectiveness of the new data processing method that combines DE features in a temporal sequence. Additionally, we analyzed the model's data flow and encoding methods from a biological interpretability perspective and validated it with neuroscience research related to emotion generation and regulation, promoting further development in emotion recognition research based on EEG signals.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"31"},"PeriodicalIF":0.0,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11655793/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142847681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A temporal-spectral graph convolutional neural network model for EEG emotion recognition within and across subjects
Pub Date: 2024-12-18 | DOI: 10.1186/s40708-024-00242-x | Brain Informatics 11(1): 30
Rui Li, Xuanwen Yang, Jun Lou, Junsong Zhang
EEG-based emotion recognition uses high-level information from neural activities to predict emotional responses in subjects. However, this information is sparsely distributed across the frequency, time, and spatial domains and varies across subjects. To address these challenges in emotion recognition, we propose a novel neural network model named the Temporal-Spectral Graph Convolutional Network (TSGCN). To capture high-level information distributed across the time, spatial, and frequency domains, TSGCN considers both neural oscillation changes in different time windows and topological structures between different brain regions. Specifically, a Minimum Category Confusion (MCC) loss is used in TSGCN to reduce inconsistencies between subjective ratings and predefined labels. In addition, to improve the generalization of TSGCN to cross-subject variation, we propose Deep and Shallow feature Dynamic Adversarial Learning (DSDAL) to calculate the distance between the source domain and the target domain. Extensive experiments on public datasets demonstrate that TSGCN outperforms state-of-the-art methods in EEG-based emotion recognition. Ablation studies show that the mixed neural networks and our proposed methods in TSGCN contribute significantly to its high performance and robustness. Detailed investigations further demonstrate the effectiveness of TSGCN in addressing the challenges in emotion recognition.
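To ground the graph-convolution idea, the sketch below applies a single normalized graph-convolution step over an EEG channel graph; the adjacency, feature dimensions, and random weights are assumptions for illustration and do not reproduce TSGCN.

```python
# A minimal sketch of one graph-convolution step over an EEG channel graph,
# the basic operation a model like TSGCN composes with temporal layers.
# The adjacency, feature dimensions, and weights are illustrative assumptions.
import numpy as np

def gcn_layer(features: np.ndarray, adjacency: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adjacency + np.eye(adjacency.shape[0])         # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ features @ weights, 0.0)

rng = np.random.default_rng(7)
n_channels, n_feats, n_hidden = 62, 5, 16                  # e.g., 62 electrodes x 5 band features
adjacency = (rng.random((n_channels, n_channels)) > 0.8).astype(float)
adjacency = np.maximum(adjacency, adjacency.T)             # symmetric channel connectivity
features = rng.normal(size=(n_channels, n_feats))          # per-channel band features for one window
weights = rng.normal(scale=0.1, size=(n_feats, n_hidden))

hidden = gcn_layer(features, adjacency, weights)
print(hidden.shape)                                         # (62, 16): per-channel hidden features
```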
{"title":"A temporal-spectral graph convolutional neural network model for EEG emotion recognition within and across subjects.","authors":"Rui Li, Xuanwen Yang, Jun Lou, Junsong Zhang","doi":"10.1186/s40708-024-00242-x","DOIUrl":"10.1186/s40708-024-00242-x","url":null,"abstract":"<p><p>EEG-based emotion recognition uses high-level information from neural activities to predict emotional responses in subjects. However, this information is sparsely distributed in frequency, time, and spatial domains and varied across subjects. To address these challenges in emotion recognition, we propose a novel neural network model named Temporal-Spectral Graph Convolutional Network (TSGCN). To capture high-level information distributed in time, spatial, and frequency domains, TSGCN considers both neural oscillation changes in different time windows and topological structures between different brain regions. Specifically, a Minimum Category Confusion (MCC) loss is used in TSGCN to reduce the inconsistencies between subjective ratings and predefined labels. In addition, to improve the generalization of TSGCN on cross-subject variation, we propose Deep and Shallow feature Dynamic Adversarial Learning (DSDAL) to calculate the distance between the source domain and the target domain. Extensive experiments were conducted on public datasets to demonstrate that TSGCN outperforms state-of-the-art methods in EEG-based emotion recognition. Ablation studies show that the mixed neural networks and our proposed methods in TSGCN significantly contribute to its high performance and robustness. Detailed investigations further provide the effectiveness of TSGCN in addressing the challenges in emotion recognition.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"30"},"PeriodicalIF":0.0,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11655824/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142847680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can heart rate sequences from wearable devices predict day-long mental states in higher education students: a signal processing and machine learning case study at a UK university
Pub Date: 2024-12-05 | DOI: 10.1186/s40708-024-00243-w | Brain Informatics 11(1): 29
Tianhua Chen
The mental health of students in higher education has been a growing concern, with increasing evidence pointing to heightened risks of developing mental health conditions. This research explores whether day-long heart rate sequences, collected continuously through an Apple Watch in an open environment without restrictions on daily routines, can effectively indicate mental states, particularly stress, in university students. While heart rate (HR) is commonly used to monitor physical activity or responses to isolated stimuli in controlled settings, such as stress-inducing tests, this study addresses the gap by analyzing heart rate fluctuations throughout the day and examining their potential to gauge overall stress levels in a more comprehensive, real-world context. The data for this research was collected at a public university in the UK. Using signal processing, both the original heart rate sequences and their representations, via Fourier transformation and wavelet analysis, were modeled using advanced machine learning algorithms. Having achieved statistically significant results over the baseline, this work provides an understanding of how heart rate sequences alone may be used to characterize mental states through signal processing and machine learning, with the system poised for further testing as the ongoing data collection continues.
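As a sketch of the general pipeline described here, the example below converts day-long heart-rate traces into summary and spectral features and cross-validates a stress classifier; the simulated data, labels, and band splits are assumptions, not the study's dataset or models.

```python
# A minimal sketch of the pipeline described above: turn day-long heart-rate
# sequences into summary and spectral features, then cross-validate a stress
# classifier. The simulated data, labels, and band splits are assumptions,
# not the study's dataset or models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)

def hr_features(hr: np.ndarray) -> np.ndarray:
    """Summary statistics plus coarse spectral band powers of one day's HR trace."""
    spectrum = np.abs(np.fft.rfft(hr - hr.mean())) ** 2
    bands = np.array_split(spectrum[1:], 4)            # four coarse frequency bands
    return np.array([hr.mean(), hr.std(), *(b.mean() for b in bands)])

# Simulate 80 person-days at 1 sample/min: "stressed" days get a higher,
# more variable heart rate.
labels = rng.integers(0, 2, size=80)
days = [70 + 8 * y + rng.normal(scale=5 + 3 * y, size=1440) for y in labels]
X = np.stack([hr_features(d) for d in days])

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```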
{"title":"Can heart rate sequences from wearable devices predict day-long mental states in higher education students: a signal processing and machine learning case study at a UK university.","authors":"Tianhua Chen","doi":"10.1186/s40708-024-00243-w","DOIUrl":"10.1186/s40708-024-00243-w","url":null,"abstract":"<p><p>The mental health of students in higher education has been a growing concern, with increasing evidence pointing to heightened risks of developing mental health condition. This research aims to explore whether day-long heart rate sequences, collected continuously through Apple Watch in an open environment without restrictions on daily routines, can effectively indicate mental states, particularly stress for university students. While heart rate (HR) is commonly used to monitor physical activity or responses to isolated stimuli in a controlled setting, such as stress-inducing tests, this study addresses the gap by analyzing heart rate fluctuations throughout a day, examining their potential to gauge overall stress levels in a more comprehensive and real-world context. The data for this research was collected at a public university in the UK. Using signal processing, both original heart rate sequences and their representations, via Fourier transformation and wavelet analysis, have been modeled using advanced machine learning algorithms. Having achieving statistically significant results over the baseline, this provides a understanding of how heart rate sequences alone may be used to characterize mental states through signal processing and machine learning, with the system poised for further testing as the ongoing data collection continues.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"29"},"PeriodicalIF":0.0,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11621279/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate depth of anesthesia monitoring based on EEG signal complexity and frequency features
Pub Date: 2024-11-21 | DOI: 10.1186/s40708-024-00241-y | Brain Informatics 11(1): 28
Tianning Li, Yi Huang, Peng Wen, Yan Li
Accurate monitoring of the depth of anesthesia (DoA) is essential for ensuring patient safety and effective anesthesia management. Existing methods, such as the Bispectral Index (BIS), are limited in real-time accuracy and robustness; they generalize poorly across diverse patient datasets and are sensitive to artifacts, making it difficult to provide reliable DoA assessments in real time. This study proposes a novel method for DoA monitoring using EEG signals, focusing on accuracy, robustness, and real-time application. EEG signals were pre-processed using wavelet denoising and the discrete wavelet transform (DWT), and features such as Permutation Lempel-Ziv Complexity (PLZC) and Power Spectral Density (PSD) were extracted. A random forest regression model was employed to estimate anesthetic states, and an unsupervised learning method using the Hurst exponent algorithm and hierarchical clustering was introduced to detect transitions between anesthesia states. The method was tested on two independent datasets (UniSQ and VitalDB), achieving average Pearson correlation coefficients of 0.86 and 0.82, respectively. On the combined dataset, the model achieved an R-squared value of 0.70, an RMSE of 6.31, an MAE of 8.38, and a Pearson correlation of 0.84, showcasing its robustness and generalizability. This approach offers a more accurate and reliable real-time DoA monitoring tool that could significantly improve patient safety and anesthesia management, especially in diverse clinical environments.
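To make the feature-plus-regression pipeline concrete, the following sketch extracts a complexity measure and spectral band powers from synthetic EEG windows and regresses a depth index with a random forest; a simple LZ78-style phrase count stands in for PLZC, and the data and target index are assumptions.

```python
# A minimal sketch of the feature-plus-regression idea: a complexity measure and
# spectral band powers from an EEG window feed a random forest that regresses an
# anesthesia-depth index. A simple LZ78-style phrase count stands in for PLZC,
# and the synthetic windows and target index are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

def lz_phrase_count(binary: str) -> int:
    """Count LZ78 phrases in a binarized sequence (higher = more complex)."""
    phrases, current = set(), ""
    for ch in binary:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases)

def doa_features(eeg: np.ndarray, fs: float) -> np.ndarray:
    binary = "".join("1" if x > np.median(eeg) else "0" for x in eeg)
    complexity = lz_phrase_count(binary) / len(eeg)
    freqs, psd = welch(eeg, fs=fs, nperseg=256)
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]
    powers = [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands]
    return np.array([complexity, *powers])

# Synthetic 10-second windows: deeper anesthesia is simulated as slower, smoother EEG.
rng = np.random.default_rng(13)
fs, n_windows = 128.0, 200
depth = rng.uniform(0, 100, size=n_windows)             # BIS-like target index
windows = []
for d in depth:
    k = int(1 + d // 10)                                # stronger smoothing for deeper anesthesia
    windows.append(np.convolve(rng.normal(size=int(10 * fs)), np.ones(k) / k, mode="same"))
X = np.stack([doa_features(w, fs) for w in windows])

X_tr, X_te, y_tr, y_te = train_test_split(X, depth, test_size=0.25, random_state=0)
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, reg.predict(X_te)))
```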
{"title":"Accurate depth of anesthesia monitoring based on EEG signal complexity and frequency features.","authors":"Tianning Li, Yi Huang, Peng Wen, Yan Li","doi":"10.1186/s40708-024-00241-y","DOIUrl":"10.1186/s40708-024-00241-y","url":null,"abstract":"<p><p>Accurate monitoring of the depth of anesthesia (DoA) is essential for ensuring patient safety and effective anesthesia management. Existing methods, such as the Bispectral Index (BIS), are limited in real-time accuracy and robustness. Current methods have problems in generalizability across diverse patient datasets and are sensitive to artifacts, making it difficult to provide reliable DoA assessments in real time. This study proposes a novel method for DoA monitoring using EEG signals, focusing on accuracy, robustness, and real-time application. EEG signals were pre-processed using wavelet denoising and discrete wavelet transform (DWT). Features such as Permutation Lempel-Ziv Complexity (PLZC) and Power Spectral Density (PSD) were extracted. A random forest regression model was employed to estimate anesthetic states, and an unsupervised learning method using the Hurst exponent algorithm and hierarchical clustering was introduced to detect transitions between anesthesia states. The method was tested on two independent datasets (UniSQ and VitalDB), achieving an average Pearson correlation coefficient of 0.86 and 0.82, respectively. For the combined dataset, the model demonstrated an R-squared value of 0.70, a RMSE of 6.31, a MAE of 8.38, and a Pearson correlation of 0.84, showcasing its robustness and generalizability. This approach offers a more accurate and reliable real-time DoA monitoring tool that could significantly improve patient safety and anesthesia management, especially in diverse clinical environments.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"28"},"PeriodicalIF":0.0,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11582228/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142682781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advancing EEG prediction with deep learning and uncertainty estimation
Pub Date: 2024-10-26 | DOI: 10.1186/s40708-024-00239-6 | Brain Informatics 11(1): 27
Mats Tveter, Thomas Tveitstøl, Christoffer Hatlestad-Hall, Ana S Pérez T, Erik Taubøll, Anis Yazidi, Hugo L Hammer, Ira R J Hebold Haraldsen
Deep Learning (DL) has the potential to enhance patient outcomes in healthcare by implementing proficient systems for disease detection and diagnosis. However, the complexity and lack of interpretability of DL models impede their widespread adoption for critical, high-stakes predictions in healthcare. Incorporating uncertainty estimation in DL systems can increase trustworthiness, providing valuable insights into the model's confidence and improving the explanation of predictions. Additionally, introducing explainability measures recognized and embraced by healthcare experts can help address this challenge. In this study, we investigate DL models' ability to predict sex directly from electroencephalography (EEG) data. While sex prediction has limited direct clinical application, its binary nature makes it a valuable benchmark for optimizing deep learning techniques in EEG data analysis. Furthermore, we explore the use of DL ensembles to improve performance over single models and to increase interpretability and performance through uncertainty estimation. Lastly, we use a data-driven approach to evaluate the relationship between frequency bands and sex prediction, offering insights into their relative importance. InceptionNetwork, a single DL model, achieved 90.7% accuracy and an AUC of 0.947, and the best-performing ensemble, combining variations of InceptionNetwork and EEGNet, achieved 91.1% accuracy in predicting sex from EEG data using five-fold cross-validation. Uncertainty estimation through deep ensembles led to increased prediction performance, and the models were able to classify sex in all frequency bands, indicating sex-specific features across all bands.
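As an illustration of the ensemble uncertainty idea, the sketch below averages predicted probabilities from several differently seeded classifiers and uses predictive entropy as the confidence signal; the MLP stand-ins and synthetic features are assumptions, not the paper's InceptionNetwork/EEGNet ensemble.

```python
# A minimal sketch of ensemble-based uncertainty estimation: train several
# differently seeded classifiers, average their predicted probabilities, and
# use predictive entropy as the confidence signal. The MLP stand-ins and
# synthetic features are assumptions, not the paper's InceptionNetwork/EEGNet
# ensemble trained on EEG.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=40, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = [MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=seed).fit(X_tr, y_tr)
            for seed in range(5)]

probs = np.mean([m.predict_proba(X_te) for m in ensemble], axis=0)   # ensemble-averaged probabilities
pred = probs.argmax(axis=1)
entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)             # predictive entropy per sample

print("Accuracy:", (pred == y_te).mean())
print("Mean entropy (correct):", entropy[pred == y_te].mean())
print("Mean entropy (wrong):  ", entropy[pred != y_te].mean())       # typically higher when wrong
```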
{"title":"Advancing EEG prediction with deep learning and uncertainty estimation.","authors":"Mats Tveter, Thomas Tveitstøl, Christoffer Hatlestad-Hall, Ana S Pérez T, Erik Taubøll, Anis Yazidi, Hugo L Hammer, Ira R J Hebold Haraldsen","doi":"10.1186/s40708-024-00239-6","DOIUrl":"10.1186/s40708-024-00239-6","url":null,"abstract":"<p><p>Deep Learning (DL) has the potential to enhance patient outcomes in healthcare by implementing proficient systems for disease detection and diagnosis. However, the complexity and lack of interpretability impede their widespread adoption in critical high-stakes predictions in healthcare. Incorporating uncertainty estimations in DL systems can increase trustworthiness, providing valuable insights into the model's confidence and improving the explanation of predictions. Additionally, introducing explainability measures, recognized and embraced by healthcare experts, can help address this challenge. In this study, we investigate DL models' ability to predict sex directly from electroencephalography (EEG) data. While sex prediction have limited direct clinical application, its binary nature makes it a valuable benchmark for optimizing deep learning techniques in EEG data analysis. Furthermore, we explore the use of DL ensembles to improve performance over single models and as an approach to increase interpretability and performance through uncertainty estimation. Lastly, we use a data-driven approach to evaluate the relationship between frequency bands and sex prediction, offering insights into their relative importance. InceptionNetwork, a single DL model, achieved 90.7% accuracy and an AUC of 0.947, and the best-performing ensemble, combining variations of InceptionNetwork and EEGNet, achieved 91.1% accuracy in predicting sex from EEG data using five-fold cross-validation. Uncertainty estimation through deep ensembles led to increased prediction performance, and the models were able to classify sex in all frequency bands, indicating sex-specific features across all bands.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"27"},"PeriodicalIF":0.0,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11512943/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142509826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}