Pub Date: 2023-12-01 | Epub Date: 2023-10-12 | DOI: 10.1142/S0129065723500624
Francisco Laport, Adriana Dapena, Paula M Castro, Daniel I Iglesias, Francisco J Vazquez-Araujo
Brain-computer interfaces (BCIs) establish a direct communication channel between the human brain and external devices. Among various methods, electroencephalography (EEG) stands out as the most popular choice for BCI design due to its non-invasiveness, ease of use, and cost-effectiveness. This paper presents and compares the accuracy and robustness of an EEG system employing one or two channels. We present both hardware and algorithms for the detection of open and closed eyes. First, we utilize a low-cost hardware device to capture EEG activity from one or two channels. Next, we apply the discrete Fourier transform to analyze the signals in the frequency domain, extracting features from each channel. For classification, we test several well-known techniques, including Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Decision Tree (DT), and Logistic Regression (LR). To evaluate the system, we conduct experiments acquiring signals associated with open and closed eyes and compare the performance between one and two channels. The results demonstrate that employing a system with two channels and using SVM, DT, or LR classifiers enhances robustness compared to a single-channel setup and achieves an accuracy greater than 95% for both eye states.
{"title":"Eye State Detection Using Frequency Features from 1 or 2-Channel EEG.","authors":"Francisco Laport, Adriana Dapena, Paula M Castro, Daniel I Iglesias, Francisco J Vazquez-Araujo","doi":"10.1142/S0129065723500624","DOIUrl":"10.1142/S0129065723500624","url":null,"abstract":"<p><p>Brain-computer interfaces (BCIs) establish a direct communication channel between the human brain and external devices. Among various methods, electroencephalography (EEG) stands out as the most popular choice for BCI design due to its non-invasiveness, ease of use, and cost-effectiveness. This paper aims to present and compare the accuracy and robustness of an EEG system employing one or two channels. We present both hardware and algorithms for the detection of open and closed eyes. Firstly, we utilize a low-cost hardware device to capture EEG activity from one or two channels. Next, we apply the discrete Fourier transform to analyze the signals in the frequency domain, extracting features from each channel. For classification, we test various well-known techniques, including Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Decision Tree (DT), or Logistic Regression (LR). To evaluate the system, we conduct experiments, acquiring signals associated with open and closed eyes, and compare the performance between one and two channels. The results demonstrate that employing a system with two channels and using SVM, DT, or LR classifiers enhances robustness compared to a single-channel setup and allows us to achieve an accuracy percentage greater than 95% for both eye states.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2350062"},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41224086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-01 | Epub Date: 2023-10-20 | DOI: 10.1142/S012906572350065X
Enrique Adrian Villarrubia-Martin, Luis Rodriguez-Benitez, Luis Jimenez-Linares, David Muñoz-Valero, Jun Liu
Reinforcement learning (RL) is a powerful technique that allows agents to learn optimal decision-making policies through interactions with an environment. However, traditional RL algorithms suffer from several limitations, such as the need for large amounts of data and long-term credit assignment, i.e. the problem of determining which actions actually produce a certain reward. Recently, Transformers have shown their capacity to address these constraints in the offline setting. This paper proposes a framework that uses Transformers to enhance the training of online off-policy RL agents and address the challenges described above through self-attention. The proposal introduces a hybrid agent with a mixed policy that combines an online off-policy agent with an offline Transformer agent based on the Decision Transformer architecture. By sequentially exchanging the experience replay buffer between the agents, the agent's training efficiency is improved during the first iterations, as is the training of Transformer-based RL agents in situations with limited data availability or unknown environments.
{"title":"A Hybrid Online Off-Policy Reinforcement Learning Agent Framework Supported by Transformers.","authors":"Enrique Adrian Villarrubia-Martin, Luis Rodriguez-Benitez, Luis Jimenez-Linares, David Muñoz-Valero, Jun Liu","doi":"10.1142/S012906572350065X","DOIUrl":"10.1142/S012906572350065X","url":null,"abstract":"<p><p>Reinforcement learning (RL) is a powerful technique that allows agents to learn optimal decision-making policies through interactions with an environment. However, traditional RL algorithms suffer from several limitations such as the need for large amounts of data and long-term credit assignment, i.e. the problem of determining which actions actually produce a certain reward. Recently, Transformers have shown their capacity to address these constraints in this area of learning in an offline setting. This paper proposes a framework that uses Transformers to enhance the training of online off-policy RL agents and address the challenges described above through self-attention. The proposal introduces a hybrid agent with a mixed policy that combines an online off-policy agent with an offline Transformer agent using the Decision Transformer architecture. By sequentially exchanging the experience replay buffer between the agents, the agent's learning training efficiency is improved in the first iterations and so is the training of Transformer-based RL agents in situations with limited data availability or unknown environments.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2350065"},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49686651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-01 | Epub Date: 2023-10-13 | DOI: 10.1142/S0129065723820014
{"title":"Announcement: The 2023 Hojjat Adeli Award for Outstanding Contributions in Neural Systems.","authors":"","doi":"10.1142/S0129065723820014","DOIUrl":"10.1142/S0129065723820014","url":null,"abstract":"","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2382001"},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41224084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-01 | Epub Date: 2023-09-23 | DOI: 10.1142/S0129065723500600
Junjie Hu, Chengrong Yu, Zhang Yi, Haixian Zhang
Deep neural networks (DNNs) have emerged as a prominent model in medical image segmentation, achieving remarkable advancements in clinical practice. Despite the promising results reported in the literature, the effectiveness of DNNs necessitates substantial quantities of high-quality annotated training data. During experiments, we observe a significant decline in the performance of DNNs on the test set when the labels of the training dataset are disrupted, revealing inherent limitations in the robustness of DNNs. In this paper, we find that the neural memory ordinary differential equation (nmODE), a recently proposed model based on ordinary differential equations (ODEs), not only addresses the robustness limitation but also enhances performance when trained on a clean training dataset. However, it is acknowledged that ODE-based models tend to be less computationally efficient than conventional discrete models due to the multiple function evaluations required by the ODE solver. Recognizing this efficiency limitation, we propose a novel approach called nmODE-based knowledge distillation (nmODE-KD). The proposed method aims to transfer knowledge from the continuous nmODE to a discrete layer, simultaneously enhancing the model's robustness and efficiency. The core concept of nmODE-KD is to enforce the discrete layer to mimic the continuous nmODE by minimizing the KL divergence between them. Experimental results on 18 organs-at-risk segmentation tasks demonstrate that nmODE-KD exhibits improved robustness compared to ODE-based models while also mitigating the efficiency limitation.
{"title":"Enhancing Robustness of Medical Image Segmentation Model with Neural Memory Ordinary Differential Equation.","authors":"Junjie Hu, Chengrong Yu, Zhang Yi, Haixian Zhang","doi":"10.1142/S0129065723500600","DOIUrl":"10.1142/S0129065723500600","url":null,"abstract":"<p><p>Deep neural networks (DNNs) have emerged as a prominent model in medical image segmentation, achieving remarkable advancements in clinical practice. Despite the promising results reported in the literature, the effectiveness of DNNs necessitates substantial quantities of high-quality annotated training data. During experiments, we observe a significant decline in the performance of DNNs on the test set when there exists disruption in the labels of the training dataset, revealing inherent limitations in the robustness of DNNs. In this paper, we find that the neural memory ordinary differential equation (nmODE), a recently proposed model based on ordinary differential equations (ODEs), not only addresses the robustness limitation but also enhances performance when trained by the clean training dataset. However, it is acknowledged that the ODE-based model tends to be less computationally efficient compared to the conventional discrete models due to the multiple function evaluations required by the ODE solver. Recognizing the efficiency limitation of the ODE-based model, we propose a novel approach called the nmODE-based knowledge distillation (nmODE-KD). The proposed method aims to transfer knowledge from the continuous nmODE to a discrete layer, simultaneously enhancing the model's robustness and efficiency. The core concept of nmODE-KD revolves around enforcing the discrete layer to mimic the continuous nmODE by minimizing the KL divergence between them. Experimental results on 18 organs-at-risk segmentation tasks demonstrate that nmODE-KD exhibits improved robustness compared to ODE-based models while also mitigating the efficiency limitation.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2350060"},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41180733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-01 | Epub Date: 2023-10-13 | DOI: 10.1142/S0129065723500648
Mosab A A Yousif, Mahmut Ozturk
ConceFT (concentration of frequency and time) is a new time-frequency (TF) analysis method that combines the multitaper technique with the synchrosqueezing transform (SST). This combination produces highly concentrated TF representations with approximately perfect time and frequency resolutions. This paper aims to demonstrate the TF representation performance and robustness of ConceFT by using it for the classification of epileptic electroencephalography (EEG) signals. To this end, a signal classification algorithm that feeds TF images obtained with ConceFT into a transfer learning structure is presented. Epilepsy is a common neurological disorder from which millions of people suffer worldwide, and the daily lives of patients are severely affected by the unpredictable timing of seizures. EEG signals, which monitor the electrical activity of the brain, can be used to detect approaching seizures and make it possible to warn the patient before an attack. GoogLeNet, a well-known deep learning model, was chosen to classify the TF images. Classification performance is directly related to the TF representation accuracy of ConceFT. The proposed method was tested in various classification scenarios and achieved accuracies between 95.83% and 99.58% for two- and three-class scenarios. These results show that ConceFT is a successful and promising TF analysis method for non-stationary biomedical signals.
{"title":"Deep Learning-Based Classification of Epileptic Electroencephalography Signals Using a Concentrated Time-Frequency Approach.","authors":"Mosab A A Yousif, Mahmut Ozturk","doi":"10.1142/S0129065723500648","DOIUrl":"10.1142/S0129065723500648","url":null,"abstract":"<p><p>ConceFT (concentration of frequency and time) is a new time-frequency (TF) analysis method which combines multitaper technique and synchrosqueezing transform (SST). This combination produces highly concentrated TF representations with approximately perfect time and frequency resolutions. In this paper, it is aimed to show the TF representation performance and robustness of ConceFT by using it for the classification of the epileptic electroencephalography (EEG) signals. Therefore, a signal classification algorithm which uses TF images obtained with ConceFT to feed the transfer learning structure has been presented. Epilepsy is a common neurological disorder that millions of people suffer worldwide. Daily lives of the patients are quite difficult because of the unpredictable time of seizures. EEG signals monitoring the electrical activity of the brain can be used to detect approaching seizures and make possible to warn the patient before the attack. GoogLeNet which is a well-known deep learning model has been preferred to classify TF images. Classification performance is directly related to the TF representation accuracy of the ConceFT. The proposed method has been tested for various classification scenarios and obtained accuracies between 95.83% and 99.58% for two and three-class classification scenarios. High results show that ConceFT is a successful and promising TF analysis method for non-stationary biomedical signals.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2350064"},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41224085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-07 | DOI: 10.1142/s0129065724500059
Federica Colonnese, Francesco Di Luzio, Antonello Rosato, Massimo Panella
{"title":"Bimodal Feature Analysis with Deep Learning for Autism Spectrum Disorder Detection","authors":"Federica Colonnese, Francesco Di Luzio, Antonello Rosato, Massimo Panella","doi":"10.1142/s0129065724500059","DOIUrl":"https://doi.org/10.1142/s0129065724500059","url":null,"abstract":"","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"71 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135545030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-07 | DOI: 10.1142/s0129065723500697
Claudia Greco, Gennaro Raimo, Terry Amorese, Marialucia Cuciniello, Gavin Mcconvey, Gennaro Cordasco, Marcos Faundez-Zanuy, Alessandro Vinciarelli, Zoraida Callejas-Carrion, Anna Esposito
This study contributes knowledge on the detection of depression through handwriting/drawing features, aiming to identify quantitative and noninvasive indicators of the disorder that can support algorithms for its automatic detection. For this purpose, an original online approach was adopted to provide a dynamic evaluation of the handwriting/drawing performance of healthy participants with no history of any psychiatric disorders ([Formula: see text]) and patients with a clinical diagnosis of depression ([Formula: see text]). Both groups were asked to complete seven tasks requiring either writing or drawing on paper, while five categories of handwriting/drawing features (i.e. pressure on the paper, time, ductus, space among characters, and pen inclination) were recorded using a digitizing tablet. The collected records were statistically analyzed. Results showed that, except for pressure, all the considered features successfully discriminate between depressed and nondepressed subjects. In addition, it was observed that depression affects different writing/drawing functionalities. These findings suggest the adoption of writing/drawing tasks in clinical practice as tools to support current depression detection methods, which would have important repercussions in reducing diagnostic times and improving treatment formulation.
{"title":"Discriminative Power of Handwriting and Drawing Features in Depression","authors":"Claudia Greco, Gennaro Raimo, Terry Amorese, Marialucia Cuciniello, Gavin Mcconvey, Gennaro Cordasco, Marcos Faundez-Zanuy, Alessandro Vinciarelli, Zoraida Callejas-Carrion, Anna Esposito","doi":"10.1142/s0129065723500697","DOIUrl":"https://doi.org/10.1142/s0129065723500697","url":null,"abstract":"This study contributes knowledge on the detection of depression through handwriting/drawing features, to identify quantitative and noninvasive indicators of the disorder for implementing algorithms for its automatic detection. For this purpose, an original online approach was adopted to provide a dynamic evaluation of handwriting/drawing performance of healthy participants with no history of any psychiatric disorders ([Formula: see text]), and patients with a clinical diagnosis of depression ([Formula: see text]). Both groups were asked to complete seven tasks requiring either the writing or drawing on a paper while five handwriting/drawing features' categories (i.e. pressure on the paper, time, ductus, space among characters, and pen inclination) were recorded by using a digitalized tablet. The collected records were statistically analyzed. Results showed that, except for pressure, all the considered features, successfully discriminate between depressed and nondepressed subjects. In addition, it was observed that depression affects different writing/drawing functionalities. These findings suggest the adoption of writing/drawing tasks in the clinical practice as tools to support the current depression detection methods. This would have important repercussions on reducing the diagnostic times and treatment formulation.","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"71 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135545032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-03 | DOI: 10.1142/s0129065723500685
Nadia Mammone, Cosimo Ieracitano, Rossella Spataro, Christoph Guger, Woosang Cho, Francesco Carlo Morabito
{"title":"A few-shot transfer learning approach for motion intention decoding from electroencephalographic signals","authors":"Nadia Mammone, Cosimo Ieracitano, Rossella Spataro, Christoph Guger, Woosang Cho, Francesco Carlo Morabito","doi":"10.1142/s0129065723500685","DOIUrl":"https://doi.org/10.1142/s0129065723500685","url":null,"abstract":"","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"33 19","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135873431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01 | Epub Date: 2023-09-30 | DOI: 10.1142/S0129065723500582
Diego Teran-Pineda, Karl Thurnhofer-Hemsi, Enrique Domínguez
Human activity recognition is an application of machine learning that aims to identify activities from raw data acquired by different sensors. In medicine, human gait is commonly analyzed by doctors to detect abnormalities and determine possible treatments, and monitoring the patient's activity is paramount in evaluating the treatment's evolution. This type of classification is still not precise enough, which may lead to unfavorable reactions and responses. A novel methodology that reduces the complexity of extracting features from multimodal sensors is proposed to improve human activity classification based on accelerometer data. A sliding window technique is used to demarcate the first dominant spectral amplitude, decreasing dimensionality and improving feature extraction. In this work, we compared several state-of-the-art machine learning classifiers evaluated on the HuGaDB dataset and validated on our own dataset. Several configurations to reduce features and training time were analyzed using multimodal sensors: all-axis spectrum, single-axis spectrum, and sensor reduction.
{"title":"Human Gait Activity Recognition Using Multimodal Sensors.","authors":"Diego Teran-Pineda, Karl Thurnhofer-Hemsi, Enrique Domínguez","doi":"10.1142/S0129065723500582","DOIUrl":"10.1142/S0129065723500582","url":null,"abstract":"<p><p>Human activity recognition is an application of machine learning with the aim of identifying activities from the gathered activity raw data acquired by different sensors. In medicine, human gait is commonly analyzed by doctors to detect abnormalities and determine possible treatments for the patient. Monitoring the patient's activity is paramount in evaluating the treatment's evolution. This type of classification is still not enough precise, which may lead to unfavorable reactions and responses. A novel methodology that reduces the complexity of extracting features from multimodal sensors is proposed to improve human activity classification based on accelerometer data. A sliding window technique is used to demarcate the first dominant spectral amplitude, decreasing dimensionality and improving feature extraction. In this work, we compared several state-of-art machine learning classifiers evaluated on the HuGaDB dataset and validated on our dataset. Several configurations to reduce features and training time were analyzed using multimodal sensors: all-axis spectrum, single-axis spectrum, and sensor reduction.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2350058"},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41151065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01 | Epub Date: 2023-10-04 | DOI: 10.1142/S0129065723500594
Jhielson M Pimentel, Renan C Moioli, Mariana F P De Araujo, Patricia A Vargas
This work presents a neurorobotics model of the brain that integrates the cerebellum and the basal ganglia to coordinate movements in a humanoid robot. This cerebellar-basal ganglia circuitry is well known for its relevance to the motor control used by most mammals. Other computational models have been designed for similar applications in the robotics field; however, most of them completely ignore the interplay between neurons from the basal ganglia and the cerebellum. Recently, neuroscientists have indicated that neurons from both regions communicate not only at the level of the cerebral cortex but also at the subcortical level. In this work, we built an integrated neurorobotics model to assess the capacity of the network to predict and adjust the motion of the hands of a robot in real time. Our model was capable of performing different movements in a humanoid robot while respecting the sensorimotor loop of the robot and the biophysical features of the neuronal circuitry. The experiments were executed both in simulation and in the real world. We believe that the proposed neurorobotics model can be an important tool for new studies on the brain and a reference toward new robot motor controllers.
{"title":"An Integrated Neurorobotics Model of the Cerebellar-Basal Ganglia Circuitry.","authors":"Jhielson M Pimentel, Renan C Moioli, Mariana F P De Araujo, Patricia A Vargas","doi":"10.1142/S0129065723500594","DOIUrl":"10.1142/S0129065723500594","url":null,"abstract":"<p><p>This work presents a neurorobotics model of the brain that integrates the cerebellum and the basal ganglia regions to coordinate movements in a humanoid robot. This cerebellar-basal ganglia circuitry is well known for its relevance to the motor control used by most mammals. Other computational models have been designed for similar applications in the robotics field. However, most of them completely ignore the interplay between neurons from the basal ganglia and cerebellum. Recently, neuroscientists indicated that neurons from both regions communicate not only at the level of the cerebral cortex but also at the subcortical level. In this work, we built an integrated neurorobotics model to assess the capacity of the network to predict and adjust the motion of the hands of a robot in real time. Our model was capable of performing different movements in a humanoid robot by respecting the sensorimotor loop of the robot and the biophysical features of the neuronal circuitry. The experiments were executed in simulation and the real world. We believe that our proposed neurorobotics model can be an important tool for new studies on the brain and a reference toward new robot motor controllers.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2350059"},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41143843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}