Pub Date: 2025-09-01. Epub Date: 2025-07-22. DOI: 10.1016/j.neuri.2025.100220
Tursun Alkam, Andrew H Van Benschoten, Ebrahim Tarshizi
Reinforcement learning (RL), a computational framework rooted in behavioral psychology, enables agents to learn optimal actions through trial and error. It now powers intelligent systems across domains such as autonomous driving, robotics, and logistics, solving tasks once thought to require human cognition. As RL reshapes artificial intelligence (AI), it raises a critical question in neuroscience: does the brain learn through similar mechanisms? Growing evidence suggests it does.
To bridge this interdisciplinary gap, this review introduces core RL concepts to neuroscientists and clinicians with limited AI exposure. We outline the agent–environment interaction loop and describe key architectures including model-free, model-based, and meta-RL. We then examine how advances in deep RL have generated testable hypotheses about neural computation and behavior. In parallel, we discuss how neurobiological findings, especially the role of dopamine in encoding reward prediction errors, have inspired biologically grounded RL models. Empirical studies reveal neural correlates of RL algorithms in the basal ganglia, prefrontal cortex, and hippocampus, supporting their roles in planning, memory, and decision-making. We also highlight clinical applications, including how RL frameworks are used to model cognitive decline and psychiatric disorders, while acknowledging limitations in scaling RL to biological complexity.
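The reward-prediction-error idea at the heart of this exchange between RL and dopamine research can be made concrete in a few lines of temporal-difference (TD) learning. The chain task, learning rate, and discount factor below are illustrative choices for a sketch, not taken from the review:

```python
# Minimal temporal-difference (TD) learning sketch: the TD error "delta"
# plays the computational role attributed to phasic dopamine (reward
# prediction error). Task and hyperparameters are illustrative.

def td_learn(rewards, n_states, alpha=0.1, gamma=0.9, episodes=500):
    """Learn state values V(s) for a fixed left-to-right chain of states."""
    V = [0.0] * n_states
    for _ in range(episodes):
        for s in range(n_states):
            r = rewards[s]
            v_next = V[s + 1] if s + 1 < n_states else 0.0  # terminal after last state
            delta = r + gamma * v_next - V[s]   # reward prediction error
            V[s] += alpha * delta               # value update
    return V

# Reward arrives only in the final state; earlier states acquire discounted
# value (0.9, 0.81, ...), mirroring how dopamine responses shift backward
# to reward-predicting cues over training.
values = td_learn(rewards=[0.0, 0.0, 1.0], n_states=3)
```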
Looking ahead, RL offers powerful tools for understanding brain function, guiding brain–machine interfaces, and personalizing psychiatric treatment. The convergence of RL and neuroscience thus provides a promising interdisciplinary lens for advancing our understanding of learning and decision-making in both artificial agents and the human brain.
Reinforcement learning in artificial intelligence and neurobiology. Neuroscience informatics, 5(3), Article 100220.
Pub Date: 2025-09-01. Epub Date: 2025-06-17. DOI: 10.1016/j.neuri.2025.100216
Benjamin Segun Aribisala , Deirdre Edward , Godwin Ogbole , Onoja M. Akpa , Segun Ayilara , Fred Sarfo , Olusola Olabanjo , Adekunle Fakunle , Babafemi Oluropo Macaulay , Joseph Yaria , Joshua Akinyemi , Albert Akpalu , Kolawole Wahab , Reginald Obiako , Morenikeji Komolafe , Lukman Owolabi , Godwin Osaigbovo , Akinkunmi Paul Okekunle , Arti Singh , Philip Ibinaye , Mayowa Owolabi
Background
Stroke is the second leading cause of death and the third leading cause of disability globally, and Africa bears the largest share of this burden. Accurate models are needed in Africa to predict and prevent stroke occurrence. The aim of this study was to identify the best machine learning (ML) algorithm for stroke prediction.
Methods
We assessed medical data of 4,236 subjects, comprising 2,118 stroke patients and 2,118 controls, from the SIREN database. Sixteen established vascular risk factors were evaluated: adding salt to food at the table, cardiac disease, diabetes mellitus, dyslipidemia, education, family history of cardiovascular disease, hypertension, income, low green leafy vegetable consumption, obesity, physical inactivity, regular meat consumption, regular sugar consumption, smoking, stress, and tobacco use. From these, we also selected the 11 topmost risk factors using Population-Attributable Risk ranking. Eleven ML models were built and empirically evaluated using both the 16-factor and the 11-factor sets.
Results
Our results showed that the 16-feature classification (maximum AUC of 82.32%) performed slightly better than the 11-feature classification (maximum AUC of 81.17%). The Artificial Neural Network (ANN) performed best among the eleven algorithms investigated, with an AUC of 82.32%, a sensitivity of 71.23%, and a specificity of 80.00%.
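For readers less familiar with these metrics, AUC, sensitivity, and specificity can all be computed directly from predicted scores and binary labels. A self-contained sketch on toy data (not the SIREN cohort):

```python
# ROC AUC via its rank-based (Mann-Whitney) form, plus sensitivity and
# specificity at a fixed operating threshold. The labels/scores below are
# toy values for illustration only.

def roc_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # fraction of (positive, negative) pairs ranked correctly; ties count half
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold=0.5):
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    fn = sum(1 for l, s in zip(labels, scores) if l == 1 and s < threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
auc = roc_auc(labels, scores)           # threshold-free ranking quality
sens, spec = sens_spec(labels, scores)  # at a 0.5 threshold
```

Note that AUC is threshold-free while sensitivity and specificity depend on the chosen operating point, which is why a model can share a maximum AUC with another yet report different sensitivity/specificity trade-offs.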
Conclusion
Machine learning algorithms, using major risk factors, predicted stroke occurrence in Sub-Saharan Africa better than regression models did. Machine learning, especially the Artificial Neural Network, is recommended for enhancing Afrocentric stroke prediction models and for stroke risk factor quantification and control in Africa.
Predicting stroke with machine learning techniques in a sub-Saharan African population. Neuroscience informatics, 5(3), Article 100216.
Pub Date: 2025-09-01. Epub Date: 2025-07-10. DOI: 10.1016/j.neuri.2025.100218
Bhoomi Gupta , Ganesh Kanna Jegannathan , Mohammad Shabbir Alam , Kottala Sri Yogi , Janjhyam Venkata Naga Ramesh , Vemula Jasmine Sowmya , Isa Bayhan
Conventional single-modal approaches for auxiliary diagnosis of Alzheimer's disease (AD) face several limitations, including insufficient availability of expertly annotated imaging datasets, unstable feature extraction, and high computational demands. To address these challenges, we propose Light-Mo-DAD, a lightweight multimodal diagnostic neural network designed to integrate MRI, PET imaging, and neuropsychological assessment scores for enhanced AD detection. In the neuroimaging feature extraction module, redundancy-reduced convolutional operations are employed to capture fine-grained local features, while a global filtering mechanism enables the extraction of holistic spatial patterns. Multimodal feature fusion is achieved through spatial image registration and summation, allowing for effective integration of structural and functional imaging modalities. The neurocognitive feature extraction module utilizes depthwise separable convolutions to process cognitive assessment data, which are then fused with multimodal imaging features. To further enhance the model's discriminative capacity, transfer learning techniques are applied. A multilayer perceptron (MLP) classifier is incorporated to capture complex feature interactions and improve diagnostic precision. Evaluation on the ADNI dataset demonstrates that Light-Mo-DAD achieves 98.0% accuracy, 98.5% sensitivity, and 97.5% specificity, highlighting its robustness in early AD detection. These results suggest that the proposed architecture not only enhances diagnostic accuracy but also offers strong potential for real-time, mobile deployment in clinical settings, supporting neurologists in efficient and reliable Alzheimer's diagnosis.
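The "lightweight" claim rests largely on depthwise separable convolutions, which factor a standard convolution into a per-channel spatial filter plus a 1×1 pointwise channel mix. A parameter-count sketch (the layer sizes are illustrative, not the actual Light-Mo-DAD configuration):

```python
# Weight counts for a standard vs. a depthwise separable 2-D convolution.
# c_in/c_out/kernel size below are illustrative, not taken from the paper.

def standard_conv_params(c_in, c_out, k):
    return c_in * c_out * k * k            # one k x k filter per (in, out) channel pair

def depthwise_separable_params(c_in, c_out, k):
    depthwise = c_in * k * k               # one k x k spatial filter per input channel
    pointwise = c_in * c_out               # 1 x 1 conv mixing channels
    return depthwise + pointwise

c_in, c_out, k = 64, 128, 3
full = standard_conv_params(c_in, c_out, k)         # 73,728 weights
light = depthwise_separable_params(c_in, c_out, k)  # 8,768 weights
ratio = full / light                                # roughly 8.4x fewer parameters
```

The same factorization underlies mobile-oriented vision architectures, which is consistent with the paper's goal of real-time mobile deployment.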
Multimodal lightweight neural network for Alzheimer's disease diagnosis integrating neuroimaging and cognitive scores. Neuroscience informatics, 5(3), Article 100218.
Pub Date: 2025-09-01. Epub Date: 2025-06-18. DOI: 10.1016/j.neuri.2025.100214
Mohammed E. Seno , Niladri Maiti , Maulik Patel , Mihirkumar M. Patel , Kalpesh B. Chaudhary , Ashish Pasaya , Babacar Toure
To address the limitations of traditional unimodal brain-computer interface (BCI) technologies based on electroencephalography (EEG), such as low spatial resolution and high susceptibility to noise, an increasing number of neuroscience-driven studies have begun to focus on BCI systems that fuse EEG signals with functional near-infrared spectroscopy (fNIRS) signals. However, integrating these two heterogeneous neurophysiological signals presents significant challenges. In this work, we propose an innovative end-to-end signal fusion method based on deep learning and evidence theory for motor imagery (MI) classification within the neuroscience domain. For EEG signals, spatiotemporal features are extracted using dual-scale temporal convolution and depthwise separable convolution, and a hybrid attention module is introduced to enhance the network's sensitivity to salient neural patterns. For fNIRS signals, spatial convolution across all channels is employed to explore activation differences among brain regions, and parallel temporal convolution combined with a gated recurrent unit (GRU) captures richer temporal dynamics of the hemodynamic response. At the decision fusion stage, decision outputs from both modalities are first quantified using Dirichlet distribution parameter estimation to model uncertainty, followed by a two-layer reasoning process using Dempster-Shafer Theory (DST) to fuse evidence from basic belief assignment (BBA) methods and from both modalities. Experimental evaluation on the publicly available TU-Berlin-A dataset demonstrates the effectiveness of the proposed model, achieving an average accuracy of 83.26%, representing a 3.78% improvement over state-of-the-art methods. These results provide new insights and methodologies for neuroscience-inspired multimodal BCI systems integrating EEG and fNIRS signals.
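Dempster's rule of combination, which underlies the decision-fusion stage described here, can be stated compactly for two sources over a binary frame {left, right}. The masses assigned to the EEG and fNIRS classifiers below are made up for the sketch; the paper's actual BBAs come from Dirichlet parameter estimation:

```python
# Dempster's rule of combination for two basic belief assignments (BBAs)
# over the same frame of discernment. Focal sets are frozensets; the
# example masses for an "EEG" and an "fNIRS" classifier are illustrative.

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb        # product mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # renormalise by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

L, R = frozenset({"left"}), frozenset({"right"})
theta = L | R                               # ignorance: mass on the whole frame
m_eeg   = {L: 0.6, R: 0.1, theta: 0.3}
m_fnirs = {L: 0.5, R: 0.2, theta: 0.3}
fused = dempster_combine(m_eeg, m_fnirs)    # agreement on "left" is reinforced
```

Because both sources lean toward "left", the fused belief in "left" exceeds either source's individual mass, which is the behavior that makes DST attractive for corroborating evidence across modalities.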
EEG–fNIRS signal integration for motor imagery classification using deep learning and evidence theory. Neuroscience informatics, 5(3), Article 100214.
Background
Selective auditory attention, the brain's ability to focus on a specific speaker in multi-talker environments, is often compromised in individuals with auditory or neurological disorders. While Auditory Attention Decoding (AAD) using EEG has shown promise in detecting attentional focus, existing models primarily utilize temporal or spectral features, often neglecting the synergistic relationships across time, space, and frequency. This limitation significantly reduces decoding accuracy, particularly in short decision windows, which are crucial for real-time applications like neuro-steered hearing aids. This study aims to enhance short-window AAD performance by fully leveraging multi-dimensional EEG characteristics.
Methods
To address this, we propose TSF-AADNet, a novel neural framework that integrates temporal–spatial and frequency–spatial features using dual-branch architectures and advanced attention-based fusion.
Results
Tested on the KULeuven and DTU datasets, TSF-AADNet achieves 91.8% and 81.1% accuracy, respectively, at 0.1-second decision windows, outperforming the state of the art by up to 7.99%.
Conclusions
These results demonstrate the model's potential in enabling precise, real-time attention tracking for hearing impairment diagnostics and next-generation neuroadaptive auditory prosthetics.
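The 0.1-second decision windows central to this result amount to slicing a continuous multi-channel recording into short fixed-length segments, each classified independently. A sketch with an assumed sampling rate (the datasets' actual rates differ):

```python
# Slice a continuous multi-channel EEG recording into short, non-overlapping
# decision windows, as required for near-real-time attention decoding.
# The sampling rate and dummy data are assumptions for illustration;
# only the 0.1 s window length follows the abstract.

def decision_windows(eeg, fs, win_sec):
    """eeg: list of per-channel sample lists. Returns a list of windows,
    each a list of per-channel segments win_sec seconds long."""
    step = int(fs * win_sec)
    n = min(len(ch) for ch in eeg)          # usable length across channels
    return [[ch[i:i + step] for ch in eeg]
            for i in range(0, n - step + 1, step)]

fs = 100                                    # assumed sampling rate (Hz)
eeg = [[0.0] * 1000 for _ in range(8)]      # 8 channels, 10 s of dummy data
wins = decision_windows(eeg, fs, win_sec=0.1)
```

Shorter windows mean less data per decision, which is exactly why sub-second accuracy is the hard regime that TSF-AADNet targets.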
Short-window EEG-based auditory attention decoding for neuroadaptive hearing support for smart healthcare. Ihtiram Raza Khan, Sheng-Lung Peng, Rupali Mahajan, Rajesh Dey. DOI: 10.1016/j.neuri.2025.100222. Neuroscience informatics, 5(3), Article 100222. Pub Date: 2025-09-01.
Pub Date: 2025-06-01. Epub Date: 2025-03-12. DOI: 10.1016/j.neuri.2025.100195
Jie Li , Gary Green , Sarah J.A. Carr , Peng Liu , Jian Zhang
Detecting whether a single subject deviates from the majority of a control group dataset is a fundamental problem. Typically, the control group is characterised using standard Normal statistics, and a single abnormal subject is detected in that context. However, in many situations the control group cannot be described by Normal statistics, making standard statistical methods inappropriate. This paper presents Bayesian Inference General Procedures for A Single-subject Test (BIGPAST), designed to mitigate the effects of skewness under the assumption that the control group data come from a skewed Student t distribution. BIGPAST operates under the null hypothesis that the single subject follows the same distribution as the control group. We assess BIGPAST's performance against other methods through simulation studies. The results demonstrate that BIGPAST is robust against deviations from normality and outperforms existing approaches, coming closest to the nominal accuracy of 0.95. BIGPAST can reduce model misspecification errors under the skewed Student t assumption by up to 12 times, as demonstrated in Section 3.3 of the paper. We apply BIGPAST to a magnetoencephalography (MEG) dataset consisting of an individual with mild traumatic brain injury and an age- and gender-matched control group. The previous method failed to detect abnormalities in 8 brain areas, whereas BIGPAST successfully identified them, demonstrating its effectiveness in detecting abnormalities in a single subject.
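For context, the normal-theory single-subject comparison that BIGPAST improves upon can be sketched in a few lines: score one subject against a control group assumed Gaussian. The control values are synthetic, and BIGPAST's skewed-t Bayesian machinery is not reproduced here:

```python
# Baseline normal-theory single-subject test of the kind BIGPAST replaces:
# z-score one subject against controls assumed Gaussian, then take a
# two-sided tail probability. Control values are synthetic.
import math
import statistics

def normal_single_subject_p(controls, subject):
    mu = statistics.mean(controls)
    sd = statistics.stdev(controls)            # sample standard deviation
    z = (subject - mu) / sd
    # two-sided tail probability under the standard normal, via erf
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

controls = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
z, p = normal_single_subject_p(controls, subject=11.5)
# If the control distribution is actually skewed, this Gaussian assumption
# distorts p -- the model misspecification error that BIGPAST mitigates.
```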
Bayesian Inference General Procedures for A Single-subject Test study. Neuroscience informatics, 5(2), Article 100195.
Non-invasive brain stimulation (NIBS) techniques, such as transcranial infrared (tNIR) stimulation, offer promising advancements in sleep monitoring and regulation. To enhance sleep stage classification without relying on traditional polysomnography (PSG) systems, we propose a novel approach integrating single-channel electrocardiogram (ECG) signals, heart rate variability (HRV) features, and tNIR stimulation. The maximal overlap discrete wavelet transform (MODWT) is applied for multi-resolution analysis of ECG signals, followed by peak information extraction. From the first-order differences of peak positions, multi-dimensional HRV features are extracted. To identify HRV features strongly associated with different sleep stages, we introduce a feature selection method combining the ReliefF algorithm and the Gini index. The selected features are then processed using the INFO-ABC LogitBoost method to establish correlations between HRV dynamics and sleep stages. Experimental results on publicly available datasets demonstrate that the proposed model achieves an overall accuracy of 83.67%, a precision of 82.59%, a Kappa coefficient of 77.94%, and an F1-score of 82.97%.
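Standard time-domain HRV features of the kind this pipeline derives from peak positions can be computed directly from R-peak times. The peak times below are synthetic, and the MODWT-based peak detection itself is not shown:

```python
# Time-domain HRV features from R-peak times (in seconds): the RR intervals
# are the first-order differences of peak positions, from which SDNN and
# RMSSD follow. Peak times are synthetic, for illustration only.
import math

def hrv_features(r_peaks):
    rr = [b - a for a, b in zip(r_peaks, r_peaks[1:])]        # RR intervals (s)
    mean_rr = sum(rr) / len(rr)
    # SDNN: sample standard deviation of RR intervals
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr) / (len(rr) - 1))
    # RMSSD: root mean square of successive RR differences
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}

# roughly 75 bpm with mild beat-to-beat variation
peaks = [0.0, 0.80, 1.62, 2.40, 3.21, 4.00]
feats = hrv_features(peaks)
```

Features like these, computed per epoch, would then feed the ReliefF/Gini selection stage described above.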
Non-invasive brain stimulation-based sleep stage classification using transcranial infrared based electrocardiogram. Janjhyam Venkata Naga Ramesh, Aadam Quraishi, Yassine Aoudni, Mustafa Mudhafar, Divya Nimma, Monika Bansal. DOI: 10.1016/j.neuri.2025.100197. Neuroscience informatics, 5(2), Article 100197. Pub Date: 2025-06-01.
The widespread adoption of the Internet has transformed various industries, driving significant systemic reforms across different sectors. This transformation has enhanced the Internet's role in information dissemination, resource sharing, and global connectivity, allowing knowledge and services to be distributed more efficiently. Internet-based learning models bring significant benefits, enabling people to use the network and learn from it, whereas the traditional education model provides only limited knowledge, restricting growth and progress. Moreover, a vast world of knowledge remains to be explored: with the help of network tools, people can now follow developments across the whole world and absorb the culture and knowledge of different regions without leaving home. In studies of long-standing problems in English education across various countries, efficient learning methods and high levels of English proficiency are the goals pursued, yet the traditional English-teaching model cannot meet students' learning needs in a short time. Building data mining models on large-scale open online courses is an approach to these legacy problems that has been adopted both domestically and internationally. According to survey data from universities in various countries, data mining algorithms can fundamentally meet students' desire and demand for English knowledge.
{"title":"Integration of software-based cognitive approaches and brain-like computer machinery for efficient cognitive computing","authors":"Chitrakant Banchhor , Manoj Kumar Rawat , Rahul Joshi , Dharmesh Dhabliya , Omkaresh Kulkarni , Sandeep Dwarkanath Pande , Umesh Pawar","doi":"10.1016/j.neuri.2025.100194","DOIUrl":"10.1016/j.neuri.2025.100194","url":null,"abstract":"<div><div>The widespread adoption of the Internet has transformed various industries, driving significant systemic reforms across sectors. This transformation has strengthened the Internet's role in information dissemination, resource sharing, and global connectivity, allowing knowledge and services to be distributed more efficiently. Development of and research on Internet-based models bring significant benefits, enabling people to use the network and learn from it. However, the traditional education model provides only limited knowledge, restricting growth and progress, and a vast world of knowledge remains to be explored. With network tools, people can now follow developments across the world and engage with the culture and knowledge of different regions without leaving home. Across countries, efficient learning methods and a high level of English proficiency are the goals of English education, yet the traditional English teaching model cannot meet students' learning needs within a short time. Building a data mining model on top of large open online courses is an approach to these longstanding problems in English education that has been adopted both domestically and internationally. Survey data from universities in various countries suggest that data mining algorithms can substantially meet students' desire and demand for English knowledge. 
This research integrates a data mining algorithm into English instruction, aiming to materially alleviate these longstanding problems.</div></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"5 2","pages":"Article 100194"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143643546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asphyxia, a critical respiratory condition, poses significant risks to newborns and can lead to catastrophic outcomes. Early detection of asphyxia is crucial for reducing infant mortality rates. Traditional medical diagnosis can be time-consuming, whereas early detection through artificial intelligence (AI) can expedite the process and improve survival rates. Despite the importance of early asphyxia detection, existing methods are often delayed and not always effective. This research addresses the need for a faster, more accurate approach to detecting infant asphyxia using machine learning (ML) and deep learning (DL) techniques. The study aims to develop a robust AI-driven system to detect asphyxia in newborns using ML and DL models, focusing on improving accuracy and efficiency over traditional diagnostic methods. Features are extracted using Mel-Frequency Cepstral Coefficients (MFCCs) and categorized into time and frequency domains. Data preprocessing techniques, such as noise removal, handling of missing values and outliers, and label encoding, are applied to ensure clean data. To address class imbalance, the Random Oversampling (ROS) technique is employed. Hyperparameter optimization is performed using GridSearchCV for various machine learning models. Deep learning models, including a custom artificial neural network (ANN1) and convolutional neural networks (CNN1, CNN2), are introduced with hidden layers for improved performance. The performance of different ML and DL models is evaluated, with Logistic Regression (LR) achieving an accuracy of 99.16% and a 0.008% error rate. In comparison, ANN1 outperforms the other DL models with an accuracy of 98.20% and a 0.018% error rate. The results demonstrate that both ML and DL techniques can significantly enhance early asphyxia detection in newborns. 
The Logistic Regression model offers the highest accuracy in machine learning, while ANN1 performs optimally in deep learning, suggesting their potential for deployment in clinical settings to improve neonatal care.
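The class-balancing step named in the abstract (Random Oversampling, ROS) can be sketched in a few lines. This is a minimal, hypothetical illustration rather than the authors' code; a real pipeline would more likely use `imblearn.over_sampling.RandomOverSampler` on the MFCC feature matrix before running scikit-learn's `GridSearchCV`:

```python
import random
from collections import Counter

def random_oversample(features, labels, seed=0):
    """Random Oversampling (ROS): duplicate randomly chosen samples of each
    minority class until every class matches the majority-class count."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_x, out_y = list(features), list(labels)
    for cls, n in counts.items():
        idx = [i for i, y in enumerate(labels) if y == cls]
        for _ in range(target - n):
            j = rng.choice(idx)  # resample with replacement
            out_x.append(features[j])
            out_y.append(labels[j])
    return out_x, out_y

# Toy MFCC-like feature vectors: four "normal cry" vs one "asphyxia" sample.
X = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.4], [0.5, 0.3], [0.9, 0.8]]
y = [0, 0, 0, 0, 1]
X_bal, y_bal = random_oversample(X, y)
print(Counter(y_bal))  # both classes now have 4 samples each
```

Because ROS only duplicates existing minority samples, it must be applied inside each cross-validation training fold (not before the split), otherwise duplicated samples leak into the validation folds and inflate reported accuracy.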
{"title":"Analyzing infant cry to detect birth asphyxia using a hybrid CNN and feature extraction approach","authors":"Samrat Kumar Dey , Khandaker Mohammad Mohi Uddin , Arpita Howlader , Md. Mahbubur Rahman , Hafiz Md. Hasan Babu , Nitish Biswas , Umme Raihan Siddiqi , Badhan Mazumder","doi":"10.1016/j.neuri.2025.100193","DOIUrl":"10.1016/j.neuri.2025.100193","url":null,"abstract":"<div><div>Asphyxia, a critical respiratory condition, poses significant risks to newborns and can lead to catastrophic outcomes. Early detection of asphyxia is crucial for reducing infant mortality rates. Traditional medical diagnosis can be time-consuming, whereas early detection through artificial intelligence (AI) can expedite the process and improve survival rates. Despite the importance of early asphyxia detection, existing methods are often delayed and not always effective. This research addresses the need for a faster, more accurate approach to detecting infant asphyxia using machine learning (ML) and deep learning (DL) techniques. The study aims to develop a robust AI-driven system to detect asphyxia in newborns using ML and DL models, focusing on improving accuracy and efficiency over traditional diagnostic methods. Features are extracted using Mel-Frequency Cepstral Coefficients (MFCCs) and categorized into time and frequency domains. Data preprocessing techniques, such as noise removal, handling of missing values and outliers, and label encoding, are applied to ensure clean data. To address class imbalance, the Random Oversampling (ROS) technique is employed. Hyperparameter optimization is performed using GridSearchCV for various machine learning models. Deep learning models, including a custom artificial neural network (ANN1) and convolutional neural networks (CNN1, CNN2), are introduced with hidden layers for improved performance. 
The performance of different ML and DL models is evaluated, with Logistic Regression (LR) achieving an accuracy of 99.16% and a 0.008% error rate. In comparison, ANN1 outperforms other DL models with an accuracy of 98.20% and a 0.018% error rate. The results demonstrate that both ML and DL techniques can significantly enhance early asphyxia detection in newborns. The Logistic Regression model offers the highest accuracy in machine learning, while ANN1 performs optimally in deep learning, suggesting their potential for deployment in clinical settings to improve neonatal care.</div></div>","PeriodicalId":74295,"journal":{"name":"Neuroscience informatics","volume":"5 2","pages":"Article 100193"},"PeriodicalIF":0.0,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143478679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}