Distraction descriptor for brainprint authentication modelling using probability-based Incremental Fuzzy-Rough Nearest Neighbour.
Pub Date: 2023-08-05 | DOI: 10.1186/s40708-023-00200-z | Brain Informatics 10(1): 21
Siaw-Hong Liew, Yun-Huoy Choo, Yin Fen Low, Fadilla 'Atyka Nor Rashid
This paper aims to design a distraction descriptor, elicited through object variation, to refine granular knowledge incrementally using the proposed probability-based incremental update strategy in the Incremental Fuzzy-Rough Nearest Neighbour (IncFRNN) technique. Most brainprint authentication models have been tested in well-controlled environments to minimize the influence of ambient disturbance on the EEG signals, settings that differ markedly from real-world situations. Making use of the distraction is therefore wiser than eliminating it. The proposed probability-based incremental update strategy is benchmarked against the ground-truth (actual class) incremental update strategy, and the proposed technique is also benchmarked against the First-In-First-Out (FIFO) incremental update strategy in K-Nearest Neighbour (KNN). The experimental results show equivalent discriminatory performance in both high-distraction and quiet conditions, demonstrating that the proposed distraction descriptor is able to exploit the unique EEG response to ambient distraction to complement person authentication modelling in uncontrolled environments. The proposed probability-based IncFRNN technique significantly outperformed the KNN technique both with and without a defined window size threshold. Nevertheless, its performance is slightly worse than that of the actual class incremental update strategy, since the ground truth represents the gold standard. Overall, this study demonstrates a more practical brainprint authentication model with the proposed distraction descriptor and the probability-based incremental update strategy. However, the EEG distraction descriptor may vary due to intersession variability, and future research may focus on intersession variability to enhance the robustness of the brainprint authentication model.
Brain-computer interface: trend, challenges, and threats.
Pub Date: 2023-08-04 | DOI: 10.1186/s40708-023-00199-3 | Brain Informatics 10(1): 20
Baraka Maiseli, Abdi T Abdalla, Libe V Massawe, Mercy Mbise, Khadija Mkocha, Nassor Ally Nassor, Moses Ismail, James Michael, Samwel Kimambo
Brain-computer interface (BCI), an emerging technology that facilitates communication between the brain and a computer, has attracted a great deal of research in recent years. Researchers provide experimental results demonstrating that BCI can restore the capabilities of physically challenged people, hence improving their quality of life. BCI has revolutionized and positively impacted several industries, including entertainment and gaming, automation and control, education, neuromarketing, and neuroergonomics. Notwithstanding its broad range of applications, the global trend of BCI remains lightly discussed in the literature. Understanding the trend may inform researchers and practitioners of the direction of the field and of where they should invest their efforts. Noting this significance, we analyzed the metadata of 25,336 BCI publications from Scopus to determine the advancement of the field. The analysis shows an exponential growth of BCI publications in China from 2019 onwards, exceeding those from the United States, which started to decline during the same period. Implications of and reasons for this trend are discussed. Furthermore, we extensively discuss the challenges and threats limiting exploitation of BCI capabilities. A typical BCI architecture is hypothesized to address two prominent BCI threats, privacy and security, in an attempt to make the technology commercially viable to society.
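The trend analysis itself reduces to counting publication records per country and year. A minimal sketch is given below, assuming a hypothetical Scopus CSV export named scopus_bci_export.csv with "Country" and "Year" columns (actual export column names vary); it is not the authors' analysis code.

```python
# Count BCI publications per country and year from a Scopus metadata export.
import pandas as pd

records = pd.read_csv("scopus_bci_export.csv")          # 25,336 rows in the study
trend = records.groupby(["Country", "Year"]).size().unstack(fill_value=0)
print(trend.loc[["China", "United States"]])             # publications per year
```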
{"title":"Brain-computer interface: trend, challenges, and threats.","authors":"Baraka Maiseli, Abdi T Abdalla, Libe V Massawe, Mercy Mbise, Khadija Mkocha, Nassor Ally Nassor, Moses Ismail, James Michael, Samwel Kimambo","doi":"10.1186/s40708-023-00199-3","DOIUrl":"10.1186/s40708-023-00199-3","url":null,"abstract":"<p><p>Brain-computer interface (BCI), an emerging technology that facilitates communication between brain and computer, has attracted a great deal of research in recent years. Researchers provide experimental results demonstrating that BCI can restore the capabilities of physically challenged people, hence improving the quality of their lives. BCI has revolutionized and positively impacted several industries, including entertainment and gaming, automation and control, education, neuromarketing, and neuroergonomics. Notwithstanding its broad range of applications, the global trend of BCI remains lightly discussed in the literature. Understanding the trend may inform researchers and practitioners on the direction of the field, and on where they should invest their efforts more. Noting this significance, we have analyzed 25,336 metadata of BCI publications from Scopus to determine advancement of the field. The analysis shows an exponential growth of BCI publications in China from 2019 onwards, exceeding those from the United States that started to decline during the same period. Implications and reasons for this trend are discussed. Furthermore, we have extensively discussed challenges and threats limiting exploitation of BCI capabilities. A typical BCI architecture is hypothesized to address two prominent BCI threats, privacy and security, as an attempt to make the technology commercially viable to the society.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"10 1","pages":"20"},"PeriodicalIF":0.0,"publicationDate":"2023-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10403483/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9948607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An evaluation of transfer learning models in EEG-based authentication.
Pub Date: 2023-08-03 | DOI: 10.1186/s40708-023-00198-4 | Brain Informatics 10(1): 19
Hui Yen Yap, Yun-Huoy Choo, Zeratul Izzah Mohd Yusoh, Wee How Khoh
Electroencephalogram (EEG)-based authentication has received increasing attention from researchers, as they believe it could serve as an alternative to more conventional personal authentication methods. Unfortunately, EEG signals are non-stationary and can easily be contaminated by noise and artifacts, so further processing and analysis of the data are needed to retrieve useful information. Various machine learning approaches have been proposed and implemented in the EEG-based domain, with deep learning being the current trend. However, sustaining the performance of a deep learning model requires substantial computational effort and a vast amount of data, especially when the models go deeper to generate consistent results, and deep learning models trained from scratch with small data sets may suffer from overfitting. Transfer learning offers an alternative solution: it is a technique for recognizing and applying the knowledge and skills learned from previous tasks to a new domain with limited training data. This study explores the applicability of transferring various pre-trained models' knowledge to the EEG-based authentication domain. A self-collected database consisting of 30 subjects was utilized in the analysis. The database enrolment is divided into two sessions, with each session producing two sets of EEG recordings. The frequency spectra of the preprocessed EEG signals are extracted and fed into the pre-trained models as input data. Three experimental tests were carried out, and the best performance is reported with accuracy in the range of 99.1-99.9%. The acquired results demonstrate the efficiency of transfer learning in authenticating an individual in this domain.
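For readers unfamiliar with this transfer-learning setup, the sketch below shows the usual recipe under stated assumptions: EEG frequency spectra rendered as 3-channel images, a frozen ImageNet-pretrained ResNet-18 standing in for whichever pre-trained networks the authors evaluated, and a new 30-way classification head for the 30 enrolled subjects.

```python
# Transfer-learning sketch: frozen pre-trained backbone, new classification head.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)
for p in model.parameters():                      # freeze pre-trained features
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 30)    # 30 enrolled subjects

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of spectrum "images".
x = torch.randn(8, 3, 224, 224)                   # hypothetical spectral-image batch
y = torch.randint(0, 30, (8,))                    # subject identities
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```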
{"title":"An evaluation of transfer learning models in EEG-based authentication.","authors":"Hui Yen Yap, Yun-Huoy Choo, Zeratul Izzah Mohd Yusoh, Wee How Khoh","doi":"10.1186/s40708-023-00198-4","DOIUrl":"10.1186/s40708-023-00198-4","url":null,"abstract":"<p><p>Electroencephalogram(EEG)-based authentication has received increasing attention from researchers as they believe it could serve as an alternative to more conventional personal authentication methods. Unfortunately, EEG signals are non-stationary and could be easily contaminated by noise and artifacts. Therefore, further processing of data analysis is needed to retrieve useful information. Various machine learning approaches have been proposed and implemented in the EEG-based domain, with deep learning being the most current trend. However, retaining the performance of a deep learning model requires substantial computational effort and a vast amount of data, especially when the models go deeper to generate consistent results. Deep learning models trained with small data sets from scratch may experience an overfitting issue. Transfer learning becomes an alternative solution. It is a technique to recognize and apply the knowledge and skills learned from the previous tasks to a new domain with limited training data. This study attempts to explore the applicability of transferring various pre-trained models' knowledge to the EEG-based authentication domain. A self-collected database that consists of 30 subjects was utilized in the analysis. The database enrolment is divided into two sessions, with each session producing two sets of EEG recording data. The frequency spectrums of the preprocessed EEG signals are extracted and fed into the pre-trained models as the input data. Three experimental tests are carried out and the best performance is reported with accuracy in the range of 99.1-99.9%. The acquired results demonstrate the efficiency of transfer learning in authenticating an individual in this domain.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"10 1","pages":"19"},"PeriodicalIF":0.0,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10400490/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9945274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine learning for cognitive behavioral analysis: datasets, methods, paradigms, and research directions.
Pub Date: 2023-07-31 | DOI: 10.1186/s40708-023-00196-6 | Brain Informatics 10(1): 18
Priya Bhatt, Amanrose Sethi, Vaibhav Tasgaonkar, Jugal Shroff, Isha Pendharkar, Aditya Desai, Pratyush Sinha, Aditya Deshpande, Gargi Joshi, Anil Rahate, Priyanka Jain, Rahee Walambe, Ketan Kotecha, N K Jain
Human behaviour reflects cognitive abilities. Human cognition is fundamentally linked to the different experiences or characteristics of consciousness and emotion, such as joy, grief, and anger, which assist in effective communication with others. Detection of and differentiation between thoughts, feelings, and behaviours are paramount in learning to control our emotions and respond more effectively in stressful circumstances. The ability to perceive, analyse, process, interpret, remember, and retrieve information while making judgments to respond correctly is referred to as cognitive behaviour. Following the significant progress made in emotion analysis, deception detection has become one of the key areas connecting human behaviour, mainly in the forensic domain. Detection of lies, deception, malicious intent, abnormal behaviour, emotions, and stress plays a significant role in the advanced stages of behavioural science. Artificial intelligence and machine learning (AI/ML) have helped a great deal in pattern recognition, data extraction and analysis, and interpretation. The goal of using AI and ML in the behavioural sciences is to infer human behaviour, mainly for mental health or forensic investigations. The presented work provides an extensive review of research on cognitive behaviour analysis. A parametric study is presented based on different physical characteristics, emotional behaviours, data collection sensing mechanisms, unimodal and multimodal datasets, AI/ML modelling methods, challenges, and future research directions.
A systematic review on machine learning and deep learning techniques in the effective diagnosis of Alzheimer's disease.
Pub Date: 2023-07-14 | DOI: 10.1186/s40708-023-00195-7 | Brain Informatics 10(1): 17
Akhilesh Deep Arya, Sourabh Singh Verma, Prasun Chakarabarti, Tulika Chakrabarti, Ahmed A Elngar, Ali-Mohammad Kamali, Mohammad Nami
Alzheimer's disease (AD) is a brain-related disease in which the patient's condition worsens with time. AD cannot be cured by any medication; it is impossible to halt the death of brain cells, but with the help of medication the effects of AD can be delayed. Since not all patients with mild cognitive impairment (MCI) will develop AD, it is necessary during early diagnosis to accurately determine whether an MCI patient will convert to AD (MCI converter, MCI-C) or not (MCI non-converter, MCI-NC). Two modalities, positron emission tomography (PET) and magnetic resonance imaging (MRI), are used by physicians for the diagnosis of Alzheimer's disease. Machine learning and deep learning perform exceptionally well in the field of computer vision, where information must be extracted from high-dimensional data. Researchers use deep learning models in medicine for diagnosis, prognosis, and even to predict the future health of a patient under medication. This study is a systematic review of publications using machine learning and deep learning methods for the early classification of normal cognition (NC) and Alzheimer's disease (AD). It is an effort to detail the two most commonly used modalities, PET and MRI, for the identification of AD, and to evaluate the performance of both modalities while working with different classifiers.
{"title":"A systematic review on machine learning and deep learning techniques in the effective diagnosis of Alzheimer's disease.","authors":"Akhilesh Deep Arya, Sourabh Singh Verma, Prasun Chakarabarti, Tulika Chakrabarti, Ahmed A Elngar, Ali-Mohammad Kamali, Mohammad Nami","doi":"10.1186/s40708-023-00195-7","DOIUrl":"https://doi.org/10.1186/s40708-023-00195-7","url":null,"abstract":"<p><p>Alzheimer's disease (AD) is a brain-related disease in which the condition of the patient gets worse with time. AD is not a curable disease by any medication. It is impossible to halt the death of brain cells, but with the help of medication, the effects of AD can be delayed. As not all MCI patients will suffer from AD, it is required to accurately diagnose whether a mild cognitive impaired (MCI) patient will convert to AD (namely MCI converter MCI-C) or not (namely MCI non-converter MCI-NC), during early diagnosis. There are two modalities, positron emission tomography (PET) and magnetic resonance image (MRI), used by a physician for the diagnosis of Alzheimer's disease. Machine learning and deep learning perform exceptionally well in the field of computer vision where there is a requirement to extract information from high-dimensional data. Researchers use deep learning models in the field of medicine for diagnosis, prognosis, and even to predict the future health of the patient under medication. This study is a systematic review of publications using machine learning and deep learning methods for early classification of normal cognitive (NC) and Alzheimer's disease (AD).This study is an effort to provide the details of the two most commonly used modalities PET and MRI for the identification of AD, and to evaluate the performance of both modalities while working with different classifiers.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"10 1","pages":"17"},"PeriodicalIF":0.0,"publicationDate":"2023-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10349019/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10199573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing consciousness in patients with disorders of consciousness using soft-clustering.
Pub Date: 2023-07-14 | DOI: 10.1186/s40708-023-00197-5 | Brain Informatics 10(1): 16
Sophie Adama, Martin Bogdan
Consciousness is something we experience in our everyday life, most notably between waking up in the morning and going to sleep at night, but also during the rapid eye movement (REM) sleep stage. Disorders of consciousness (DoC) are states in which a person's consciousness is damaged, possibly after a traumatic brain injury. Completely locked-in syndrome (CLIS) patients, on the other hand, display covert states of consciousness: although they appear unconscious, their cognitive functions are mostly intact; they simply cannot display them externally due to their quadriplegia and inability to speak. Determining these patients' states constitutes a challenging task. The ultimate goal of the approach presented in this paper is to assess these CLIS patients' consciousness states. EEG data from DoC patients are used here first, under the assumption that if the proposed approach is able to accurately assess their consciousness states, it will do so on CLIS patients too. The method combines different sets of features, consisting of spectral, complexity, and connectivity measures, in order to increase the probability of correctly estimating their consciousness levels. The obtained results showed that the proposed approach was able to correctly estimate several DoC patients' consciousness levels. This estimation is intended as a step prior to attempting to communicate with them, in order to maximise the efficiency of brain-computer interface (BCI)-based communication systems.
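A minimal sketch of the feature-combination idea follows; it is not the authors' exact pipeline. Per-channel alpha power (spectral), spectral entropy (complexity), and a mean correlation (connectivity) are concatenated per segment, and a Gaussian mixture model then yields soft cluster memberships; the band edges, segment length, and number of clusters are assumptions.

```python
# Combine spectral, complexity and connectivity features, then soft-cluster.
import numpy as np
from scipy.signal import welch
from sklearn.mixture import GaussianMixture

def features(eeg, fs=250):
    """eeg: array (n_channels, n_samples) for one recording segment."""
    f, pxx = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
    alpha = pxx[:, (f >= 8) & (f <= 13)].mean(axis=1)            # spectral
    p = pxx / pxx.sum(axis=1, keepdims=True)
    spec_entropy = -(p * np.log(p + 1e-12)).sum(axis=1)          # complexity
    conn = np.abs(np.corrcoef(eeg)).mean()                       # connectivity
    return np.concatenate([alpha, spec_entropy, [conn]])

rng = np.random.default_rng(1)
segments = np.stack([features(rng.standard_normal((8, 2500))) for _ in range(40)])
gmm = GaussianMixture(n_components=3, random_state=0).fit(segments)
print(gmm.predict_proba(segments[:5]))   # soft cluster memberships per segment
```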
{"title":"Assessing consciousness in patients with disorders of consciousness using soft-clustering.","authors":"Sophie Adama, Martin Bogdan","doi":"10.1186/s40708-023-00197-5","DOIUrl":"https://doi.org/10.1186/s40708-023-00197-5","url":null,"abstract":"<p><p>Consciousness is something we experience in our everyday life, more especially between the time we wake up in the morning and go to sleep at night, but also during the rapid eye movement (REM) sleep stage. Disorders of consciousness (DoC) are states in which a person's consciousness is damaged, possibly after a traumatic brain injury. Completely locked-in syndrome (CLIS) patients, on the other hand, display covert states of consciousness. Although they appear unconscious, their cognitive functions are mostly intact. Only, they cannot externally display it due to their quadriplegia and inability to speak. Determining these patients' states constitutes a challenging task. The ultimate goal of the approach presented in this paper is to assess these CLIS patients consciousness states. EEG data from DoC patients are used here first, under the assumption that if the proposed approach is able to accurately assess their consciousness states, it will assuredly do so on CLIS patients too. This method combines different sets of features consisting of spectral, complexity and connectivity measures in order to increase the probability of correctly estimating their consciousness levels. The obtained results showed that the proposed approach was able to correctly estimate several DoC patients' consciousness levels. This estimation is intended as a step prior attempting to communicate with them, in order to maximise the efficiency of brain-computer interfaces (BCI)-based communication systems.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"10 1","pages":"16"},"PeriodicalIF":0.0,"publicationDate":"2023-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10348975/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9823514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction and detection of virtual reality induced cybersickness: a spiking neural network approach using spatiotemporal EEG brain data and heart rate variability.
Pub Date: 2023-07-12 | DOI: 10.1186/s40708-023-00192-w | Brain Informatics 10(1): 15
Alexander Hui Xiang Yang, Nikola Kirilov Kasabov, Yusuf Ozgur Cakmak
Virtual Reality (VR) allows users to interact with 3D immersive environments and has the potential to be a key technology across many domain applications, including access to a future metaverse. Yet, consumer adoption of VR technology is limited by cybersickness (CS), a debilitating sensation accompanied by a cluster of symptoms, including nausea, oculomotor issues and dizziness. A leading problem is the lack of automated objective tools to predict or detect CS in individuals, which can then be used for resistance training, timely warning systems or clinical intervention. This paper explores the spatiotemporal brain dynamics and heart rate variability involved in cybersickness and uses this information to both predict and detect CS episodes. The present study applies deep learning of EEG in a spiking neural network (SNN) architecture to predict CS prior to using VR (85.9%, F7) and detect it (76.6%, FP1, Cz). ECG-derived sympathetic heart rate variability (HRV) parameters can be used for both prediction (74.2%) and detection (72.6%) but at a lower accuracy than EEG. Multimodal data fusion of EEG and sympathetic HRV does not change this accuracy compared to ECG alone. The study found that Cz (premotor and supplementary motor cortex) and O2 (primary visual cortex) are key hubs in functionally connected networks associated with both CS events and susceptibility to CS. F7 is also suggested as a key area involved in integrating information and implementing responses to incongruent environments that induce cybersickness. Consequently, Cz, O2 and F7 are presented here as promising targets for intervention.
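The spiking-neural-network modelling is specific to the authors' architecture, but the ECG side can be illustrated independently. The sketch below derives sympathetic-leaning HRV quantities (LF power and LF/HF ratio) from an RR-interval series, using the conventional 0.04-0.15 Hz and 0.15-0.40 Hz band edges rather than values from the paper; such features could then feed any downstream classifier.

```python
# Frequency-domain HRV features from an RR-interval series (illustrative).
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hrv_lf_hf(rr_ms, fs_resample=4.0):
    """rr_ms: successive RR intervals in milliseconds."""
    t = np.cumsum(rr_ms) / 1000.0                         # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs_resample)
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)      # evenly sampled tachogram
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs_resample, nperseg=256)
    df = f[1] - f[0]
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df         # low-frequency power
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df         # high-frequency power
    return lf, lf / hf

rr = 800 + 50 * np.random.default_rng(2).standard_normal(300)   # synthetic RR series
print(hrv_lf_hf(rr))
```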
{"title":"Prediction and detection of virtual reality induced cybersickness: a spiking neural network approach using spatiotemporal EEG brain data and heart rate variability.","authors":"Alexander Hui Xiang Yang, Nikola Kirilov Kasabov, Yusuf Ozgur Cakmak","doi":"10.1186/s40708-023-00192-w","DOIUrl":"https://doi.org/10.1186/s40708-023-00192-w","url":null,"abstract":"<p><p>Virtual Reality (VR) allows users to interact with 3D immersive environments and has the potential to be a key technology across many domain applications, including access to a future metaverse. Yet, consumer adoption of VR technology is limited by cybersickness (CS)-a debilitating sensation accompanied by a cluster of symptoms, including nausea, oculomotor issues and dizziness. A leading problem is the lack of automated objective tools to predict or detect CS in individuals, which can then be used for resistance training, timely warning systems or clinical intervention. This paper explores the spatiotemporal brain dynamics and heart rate variability involved in cybersickness and uses this information to both predict and detect CS episodes. The present study applies deep learning of EEG in a spiking neural network (SNN) architecture to predict CS prior to using VR (85.9%, F7) and detect it (76.6%, FP1, Cz). ECG-derived sympathetic heart rate variability (HRV) parameters can be used for both prediction (74.2%) and detection (72.6%) but at a lower accuracy than EEG. Multimodal data fusion of EEG and sympathetic HRV does not change this accuracy compared to ECG alone. The study found that Cz (premotor and supplementary motor cortex) and O2 (primary visual cortex) are key hubs in functionally connected networks associated with both CS events and susceptibility to CS. F7 is also suggested as a key area involved in integrating information and implementing responses to incongruent environments that induce cybersickness. Consequently, Cz, O2 and F7 are presented here as promising targets for intervention.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"10 1","pages":"15"},"PeriodicalIF":0.0,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10338414/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9806587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing biofeedback-driven self-guided virtual reality exposure therapy through arousal detection from multimodal data using machine learning.
Pub Date: 2023-06-21 | DOI: 10.1186/s40708-023-00193-9 | Brain Informatics 10(1): 14
Muhammad Arifur Rahman, David J Brown, Mufti Mahmud, Matthew Harris, Nicholas Shopland, Nadja Heym, Alexander Sumich, Zakia Batool Turabee, Bradley Standen, David Downes, Yangang Xing, Carolyn Thomas, Sean Haddick, Preethi Premkumar, Simona Nastase, Andrew Burton, James Lewis
Virtual reality exposure therapy (VRET) is a novel intervention technique that allows individuals to experience anxiety-evoking stimuli in a safe environment, recognise specific triggers and gradually increase their exposure to perceived threats. Public-speaking anxiety (PSA) is a prevalent form of social anxiety, characterised by stressful arousal and anxiety generated when presenting to an audience. In self-guided VRET, participants can gradually increase their tolerance to exposure and reduce anxiety-induced arousal and PSA over time. However, creating such a VR environment and determining physiological indices of anxiety-induced arousal or distress is an open challenge. Environment modelling, character creation and animation, psychological state determination and the use of machine learning (ML) models for anxiety or stress detection are equally important, and multi-disciplinary expertise is required. In this work, we have explored a series of ML models with publicly available data sets (using electroencephalogram and heart rate variability) to predict arousal states. If we can detect anxiety-induced arousal, we can trigger calming activities to allow individuals to cope with and overcome distress. Here, we discuss the means of effective selection of ML models and parameters in arousal detection. We propose a pipeline to overcome the model selection problem with different parameter settings in the context of virtual reality exposure therapy. This pipeline can be extended to other domains of interest where arousal detection is crucial. Finally, we have implemented a biofeedback framework for VRET in which we successfully provided feedback, in the form of heart rate and a brain laterality index, from our acquired multimodal data for psychological intervention to overcome anxiety.
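The model-selection problem the pipeline addresses can be illustrated with a standard grid search that treats the classifier itself as a hyperparameter. The sketch below is a simplified stand-in, assuming pre-extracted EEG/HRV features and binary arousal labels; the candidate models, parameter grids, and scoring metric are illustrative rather than the paper's settings.

```python
# Model and parameter selection for arousal detection via cross-validated grid search.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X = np.random.default_rng(3).standard_normal((200, 12))   # placeholder EEG/HRV features
y = np.random.default_rng(4).integers(0, 2, 200)           # arousal: low (0) / high (1)

pipe = Pipeline([("scale", StandardScaler()), ("clf", SVC())])
param_grid = [
    {"clf": [SVC()], "clf__C": [0.1, 1, 10], "clf__kernel": ["rbf", "linear"]},
    {"clf": [RandomForestClassifier()], "clf__n_estimators": [100, 300]},
]
search = GridSearchCV(pipe, param_grid, cv=5, scoring="f1")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```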
{"title":"Enhancing biofeedback-driven self-guided virtual reality exposure therapy through arousal detection from multimodal data using machine learning.","authors":"Muhammad Arifur Rahman, David J Brown, Mufti Mahmud, Matthew Harris, Nicholas Shopland, Nadja Heym, Alexander Sumich, Zakia Batool Turabee, Bradley Standen, David Downes, Yangang Xing, Carolyn Thomas, Sean Haddick, Preethi Premkumar, Simona Nastase, Andrew Burton, James Lewis","doi":"10.1186/s40708-023-00193-9","DOIUrl":"https://doi.org/10.1186/s40708-023-00193-9","url":null,"abstract":"<p><p>Virtual reality exposure therapy (VRET) is a novel intervention technique that allows individuals to experience anxiety-evoking stimuli in a safe environment, recognise specific triggers and gradually increase their exposure to perceived threats. Public-speaking anxiety (PSA) is a prevalent form of social anxiety, characterised by stressful arousal and anxiety generated when presenting to an audience. In self-guided VRET, participants can gradually increase their tolerance to exposure and reduce anxiety-induced arousal and PSA over time. However, creating such a VR environment and determining physiological indices of anxiety-induced arousal or distress is an open challenge. Environment modelling, character creation and animation, psychological state determination and the use of machine learning (ML) models for anxiety or stress detection are equally important, and multi-disciplinary expertise is required. In this work, we have explored a series of ML models with publicly available data sets (using electroencephalogram and heart rate variability) to predict arousal states. If we can detect anxiety-induced arousal, we can trigger calming activities to allow individuals to cope with and overcome distress. Here, we discuss the means of effective selection of ML models and parameters in arousal detection. We propose a pipeline to overcome the model selection problem with different parameter settings in the context of virtual reality exposure therapy. This pipeline can be extended to other domains of interest where arousal detection is crucial. Finally, we have implemented a biofeedback framework for VRET where we successfully provided feedback as a form of heart rate and brain laterality index from our acquired multimodal data for psychological intervention to overcome anxiety.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"10 1","pages":"14"},"PeriodicalIF":0.0,"publicationDate":"2023-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10284788/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10086083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electrical analysis of logical complexity: an exploratory EEG study of logically valid/invalid deducive inference.
Pub Date: 2023-06-07 | DOI: 10.1186/s40708-023-00194-8 | Brain Informatics 10(1): 13
Francisco Salto, Carmen Requena, Paula Alvarez-Merino, Víctor Rodríguez, Jesús Poza, Roberto Hornero
Introduction: Logically valid deductive arguments are clear examples of abstract recursive computational procedures on propositions or on probabilities. However, it is not known whether the time-consuming cortical inferential processes in which logical arguments are eventually realized in the brain are in fact physically different from other kinds of inferential processes.
Methods: In order to determine whether an EEG-discernible electrical pattern of logical deduction exists, a new experimental paradigm is proposed, contrasting logically valid and invalid inferences with exactly the same content (same premises and same relational variables) and distinct logical complexity (propositional truth-functional operators). Electroencephalographic signals from 19 subjects (24.2 ± 3.3 years) were acquired in a two-condition paradigm (100 trials per condition). After the initial general analysis, a trial-by-trial approach in the beta-2 band made it possible to uncover not only evoked but also phase-asynchronous activity between trials.
Results: The results showed that (i) deductive inferences with the same content evoked the same response pattern in the logically valid and invalid conditions, (ii) the mean response time in logically valid inferences was 61.54% higher, and (iii) logically valid inferences were subjected to an early (400 ms) and a late (600 ms) reprocessing, verified by two distinct beta-2 activations (p-value < 0.01, Wilcoxon signed-rank test).
Conclusion: We found evidence of a subtle but measurable electrical trait of logical validity. The results put forward the hypothesis that some logically valid deductions are recursive or computational cortical events.
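The reported beta-2 contrast rests on per-trial band power compared across paired conditions with a Wilcoxon signed-rank test. The sketch below reproduces that statistical step on synthetic trials, assuming a 20-30 Hz beta-2 definition and a 500 Hz sampling rate, neither of which is taken from the paper.

```python
# Per-trial beta-2 band power and a paired non-parametric comparison.
import numpy as np
from scipy.signal import welch
from scipy.stats import wilcoxon

def beta2_power(trial, fs=500):
    f, pxx = welch(trial, fs=fs, nperseg=fs)
    return pxx[(f >= 20) & (f <= 30)].mean()

rng = np.random.default_rng(5)
valid = np.array([beta2_power(rng.standard_normal(1000)) for _ in range(100)])
invalid = np.array([beta2_power(rng.standard_normal(1000)) for _ in range(100)])

stat, p = wilcoxon(valid, invalid)       # paired comparison across the 100 trials
print(f"Wilcoxon W={stat:.1f}, p={p:.3f}")
```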
{"title":"Electrical analysis of logical complexity: an exploratory eeg study of logically valid/invalid deducive inference.","authors":"Francisco Salto, Carmen Requena, Paula Alvarez-Merino, Víctor Rodríguez, Jesús Poza, Roberto Hornero","doi":"10.1186/s40708-023-00194-8","DOIUrl":"https://doi.org/10.1186/s40708-023-00194-8","url":null,"abstract":"<p><strong>Introduction: </strong>Logically valid deductive arguments are clear examples of abstract recursive computational procedures on propositions or on probabilities. However, it is not known if the cortical time-consuming inferential processes in which logical arguments are eventually realized in the brain are in fact physically different from other kinds of inferential processes.</p><p><strong>Methods: </strong>In order to determine whether an electrical EEG discernible pattern of logical deduction exists or not, a new experimental paradigm is proposed contrasting logically valid and invalid inferences with exactly the same content (same premises and same relational variables) and distinct logical complexity (propositional truth-functional operators). Electroencephalographic signals from 19 subjects (24.2 ± 3.3 years) were acquired in a two-condition paradigm (100 trials for each condition). After the initial general analysis, a trial-by-trial approach in beta-2 band allowed to uncover not only evoked but also phase asynchronous activity between trials.</p><p><strong>Results: </strong>showed that (i) deductive inferences with the same content evoked the same response pattern in logically valid and invalid conditions, (ii) mean response time in logically valid inferences is 61.54% higher, (iii) logically valid inferences are subjected to an early (400 ms) and a late reprocessing (600 ms) verified by two distinct beta-2 activations (p-value < 0,01, Wilcoxon signed rank test).</p><p><strong>Conclusion: </strong>We found evidence of a subtle but measurable electrical trait of logical validity. Results put forward the hypothesis that some logically valid deductions are recursive or computational cortical events.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"10 1","pages":"13"},"PeriodicalIF":0.0,"publicationDate":"2023-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10247637/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9602628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BOARD-FTD-PACC: a graphical user interface for the synaptic and cross-frequency analysis derived from neural signals.
Pub Date: 2023-05-08 | DOI: 10.1186/s40708-023-00191-x | Brain Informatics 10(1): 12
Cécile Gauthier-Umaña, Mario Valderrama, Alejandro Múnera, Mauricio O Nava-Mesa
In order to understand the link between brain functional states and behavioral/cognitive processes, the information carried in neural oscillations can be retrieved using different analytic techniques. Processing these different bio-signals is a complex, time-consuming, and often non-automated process that requires customization, owing to the type of signal acquired, the acquisition method implemented, and the objectives of each individual research group. To this end, a new graphical user interface (GUI), named BOARD-FTD-PACC, was developed and designed to facilitate the visualization, quantification, and analysis of neurophysiological recordings. BOARD-FTD-PACC provides different customizable tools that facilitate the task of analyzing post-synaptic activity and complex neural oscillatory data, mainly through cross-frequency analysis. It is flexible, user-friendly software that can be used by a wide range of users to extract valuable information from neurophysiological signals, such as phase-amplitude coupling and relative power spectral density, among others. BOARD-FTD-PACC allows researchers to select, in the same open-source GUI, different approaches and techniques that will help promote a better understanding of synaptic and oscillatory activity in specific brain structures, with or without stimulation.
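One of the cross-frequency measures the tool reports, phase-amplitude coupling, can be estimated in a few lines. The sketch below uses the mean-vector-length modulation index with Hilbert-derived theta phase and gamma amplitude; this is one common PAC estimator and is not necessarily the one implemented in BOARD-FTD-PACC, and the band edges are assumptions.

```python
# Phase-amplitude coupling via the mean-vector-length modulation index.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(30, 80)):
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))      # e.g. theta phase
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))            # e.g. gamma amplitude
    return np.abs(np.mean(amp * np.exp(1j * phase)))             # mean vector length

fs = 1000
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
signal = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 50 * t)  # coupled test signal
print(pac_mvl(signal, fs))
```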
{"title":"BOARD-FTD-PACC: a graphical user interface for the synaptic and cross-frequency analysis derived from neural signals.","authors":"Cécile Gauthier-Umaña, Mario Valderrama, Alejandro Múnera, Mauricio O Nava-Mesa","doi":"10.1186/s40708-023-00191-x","DOIUrl":"https://doi.org/10.1186/s40708-023-00191-x","url":null,"abstract":"<p><p>In order to understand the link between brain functional states and behavioral/cognitive processes, the information carried in neural oscillations can be retrieved using different analytic techniques. Processing these different bio-signals is a complex, time-consuming, and often non-automatized process that requires customization, due to the type of signal acquired, acquisition method implemented, and the objectives of each individual research group. To this end, a new graphical user interface (GUI), named BOARD-FTD-PACC, was developed and designed to facilitate the visualization, quantification, and analysis of neurophysiological recordings. BOARD-FTD-PACC provides different and customizable tools that facilitate the task of analyzing post-synaptic activity and complex neural oscillatory data, mainly cross-frequency analysis. It is a flexible and user-friendly software that can be used by a wide range of users to extract valuable information from neurophysiological signals such as phase-amplitude coupling and relative power spectral density, among others. BOARD-FTD-PACC allows researchers to select, in the same open-source GUI, different approaches and techniques that will help promote a better understanding of synaptic and oscillatory activity in specific brain structures with or without stimulation.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"10 1","pages":"12"},"PeriodicalIF":0.0,"publicationDate":"2023-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10167074/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9497706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}