Pub Date: 2024-04-12 | DOI: 10.3389/fncom.2024.1393025
Yuanhao He, Geyang Xiao, Jun Zhu, Tao Zou, Yuan Liang
In recent years, with the rapid development of network applications and the increasing demand for high-quality network services, quality-of-service (QoS) routing has emerged as a critical network technology. The application of machine learning techniques, particularly reinforcement learning and graph neural networks, has garnered significant attention in addressing this problem. However, existing reinforcement learning methods lack research on the causal impact of agent actions on the interactive environment, and graph neural networks often fail to effectively represent link features, which are pivotal for routing optimization. Therefore, this study quantifies the causal influence between the intelligent agent and the interactive environment using causal inference techniques, aiming to guide the agent toward more efficient exploration of the action space. Simultaneously, a graph neural network is employed to embed node and link features, and a reward function is designed that comprehensively considers network performance metrics and causal relevance. A centralized reinforcement learning method is proposed to effectively achieve QoS-aware routing in Software-Defined Networking (SDN). Finally, experiments in a network simulation environment show that the proposed method outperforms the baseline on packet loss, delay, and throughput.
{"title":"Reinforcement learning-based SDN routing scheme empowered by causality detection and GNN","authors":"Yuanhao He, Geyang Xiao, Jun Zhu, Tao Zou, Yuan Liang","doi":"10.3389/fncom.2024.1393025","DOIUrl":"https://doi.org/10.3389/fncom.2024.1393025","url":null,"abstract":"In recent years, with the rapid development of network applications and the increasing demand for high-quality network service, quality-of-service (QoS) routing has emerged as a critical network technology. The application of machine learning techniques, particularly reinforcement learning and graph neural network, has garnered significant attention in addressing this problem. However, existing reinforcement learning methods lack research on the causal impact of agent actions on the interactive environment, and graph neural network fail to effectively represent link features, which are pivotal for routing optimization. Therefore, this study quantifies the causal influence between the intelligent agent and the interactive environment based on causal inference techniques, aiming to guide the intelligent agent in improving the efficiency of exploring the action space. Simultaneously, graph neural network is employed to embed node and link features, and a reward function is designed that comprehensively considers network performance metrics and causality relevance. A centralized reinforcement learning method is proposed to effectively achieve QoS-aware routing in Software-Defined Networking (SDN). Finally, experiments are conducted in a network simulation environment, and metrics such as packet loss, delay, and throughput all outperform the baseline.","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140810083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-12 | DOI: 10.3389/fncom.2024.1338280
Kwangjun Lee, Shirin Dora, Jorge F. Mejias, Sander M. Bohte, Cyriel M. A. Pennartz
Predictive coding (PC) is an influential theory in neuroscience, which suggests the existence of a cortical architecture that is constantly generating and updating predictive representations of sensory inputs. Owing to its hierarchical and generative nature, PC has inspired many computational models of perception in the literature. However, the biological plausibility of existing models has not been sufficiently explored due to their use of artificial neurons that approximate neural activity with firing rates in the continuous time domain and propagate signals synchronously. Therefore, we developed a spiking neural network for predictive coding (SNN-PC), in which neurons communicate using event-driven and asynchronous spikes. Adopting the hierarchical structure and Hebbian learning algorithms from previous PC neural network models, SNN-PC introduces two novel features: (1) a fast feedforward sweep from the input to higher areas, which generates a spatially reduced and abstract representation of input (i.e., a neural code for the gist of a scene) and provides a neurobiological alternative to an arbitrary choice of priors; and (2) a separation of positive and negative error-computing neurons, which counters the biological implausibility of a bi-directional error neuron with a very high baseline firing rate. After training with the MNIST handwritten digit dataset, SNN-PC developed hierarchical internal representations and was able to reconstruct samples it had not seen during training. SNN-PC suggests biologically plausible mechanisms by which the brain may perform perceptual inference and learning in an unsupervised manner. In addition, it may be used in neuromorphic applications that can utilize its energy-efficient, event-driven, local learning, and parallel information processing nature.
{"title":"Predictive coding with spiking neurons and feedforward gist signaling","authors":"Kwangjun Lee, Shirin Dora, Jorge F. Mejias, Sander M. Bohte, Cyriel M. A. Pennartz","doi":"10.3389/fncom.2024.1338280","DOIUrl":"https://doi.org/10.3389/fncom.2024.1338280","url":null,"abstract":"Predictive coding (PC) is an influential theory in neuroscience, which suggests the existence of a cortical architecture that is constantly generating and updating predictive representations of sensory inputs. Owing to its hierarchical and generative nature, PC has inspired many computational models of perception in the literature. However, the biological plausibility of existing models has not been sufficiently explored due to their use of artificial neurons that approximate neural activity with firing rates in the continuous time domain and propagate signals synchronously. Therefore, we developed a spiking neural network for predictive coding (SNN-PC), in which neurons communicate using event-driven and asynchronous spikes. Adopting the hierarchical structure and Hebbian learning algorithms from previous PC neural network models, SNN-PC introduces two novel features: (1) a fast feedforward sweep from the input to higher areas, which generates a spatially reduced and abstract representation of input (i.e., a neural code for the gist of a scene) and provides a neurobiological alternative to an arbitrary choice of priors; and (2) a separation of positive and negative error-computing neurons, which counters the biological implausibility of a bi-directional error neuron with a very high baseline firing rate. After training with the MNIST handwritten digit dataset, SNN-PC developed hierarchical internal representations and was able to reconstruct samples it had not seen during training. SNN-PC suggests biologically plausible mechanisms by which the brain may perform perceptual inference and learning in an unsupervised manner. In addition, it may be used in neuromorphic applications that can utilize its energy-efficient, event-driven, local learning, and parallel information processing nature.","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140564352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Early detection and diagnosis of Autism Spectrum Disorder (ASD) can significantly improve the quality of life for affected individuals. Identifying ASD from brain functional connectivity (FC) is challenging because subjects' fMRI data are highly heterogeneous across acquisition sites. Meanwhile, deep learning algorithms are effective for ASD identification but lack interpretability. In this paper, a novel approach for ASD recognition based on graph attention networks is proposed. Specifically, we treat each region of interest (ROI) of a subject as a node, perform wavelet decomposition of the BOLD signal in each ROI, extract wavelet features, and use them together with the mean and variance of the BOLD signal as node features, with the optimized FC matrix serving as the adjacency matrix. We then employ the self-attention mechanism to capture long-range dependencies among features. To enhance interpretability, node-selection pooling layers are designed to determine the importance of each ROI for prediction. The proposed framework is applied to fMRI data of children (younger than 12 years old) from the Autism Brain Imaging Data Exchange datasets. Promising results demonstrate superior performance compared to recent similar studies. The detected ROIs correspond well with previous studies and offer good interpretability.
{"title":"A novel approach for ASD recognition based on graph attention networks","authors":"Canhua Wang, Zhiyong Xiao, Yilu Xu, Qi Zhang, Jingfang Chen","doi":"10.3389/fncom.2024.1388083","DOIUrl":"https://doi.org/10.3389/fncom.2024.1388083","url":null,"abstract":"Early detection and diagnosis of Autism Spectrum Disorder (ASD) can significantly improve the quality of life for affected individuals. Identifying ASD based on brain functional connectivity (FC) poses a challenge due to the high heterogeneity of subjects’ fMRI data in different sites. Meanwhile, deep learning algorithms show efficacy in ASD identification but lack interpretability. In this paper, a novel approach for ASD recognition is proposed based on graph attention networks. Specifically, we treat the region of interest (ROI) of the subjects as node, conduct wavelet decomposition of the BOLD signal in each ROI, extract wavelet features, and utilize them along with the mean and variance of the BOLD signal as node features, and the optimized FC matrix as the adjacency matrix, respectively. We then employ the self-attention mechanism to capture long-range dependencies among features. To enhance interpretability, the node-selection pooling layers are designed to determine the importance of ROI for prediction. The proposed framework are applied to fMRI data of children (younger than 12 years old) from the Autism Brain Imaging Data Exchange datasets. Promising results demonstrate superior performance compared to recent similar studies. The obtained ROI detection results exhibit high correspondence with previous studies and offer good interpretability.","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140564355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-09 | DOI: 10.3389/fncom.2024.1209082
Qianqian Zhang, Yueyi Zhang, Ning Liu, Xiaoyan Sun
Introduction: Face recognition has been a longstanding subject of interest in the fields of cognitive neuroscience and computer vision research. One key focus has been to understand the relative importance of different facial features in identifying individuals. Previous studies in humans have demonstrated the crucial role of eyebrows in face recognition, potentially even surpassing the importance of the eyes. However, eyebrows are not only vital for face recognition but also play a significant role in recognizing facial expressions and intentions, which might occur simultaneously and influence the face recognition process.
Methods: To address these challenges, our study leveraged deep convolutional neural networks (DCNNs), an artificial face recognition system that can be specifically tailored to face recognition tasks. We investigated the relative importance of various facial features in face recognition by selectively blocking feature information from the input to the DCNN. Additionally, we conducted experiments in which we systematically blurred the information related to eyebrows to varying degrees.
Results: Our findings aligned with previous human research, revealing that eyebrows are the most critical feature for face recognition, followed by the eyes, mouth, and nose, in that order. The presence of eyebrows was more crucial than their specific high-frequency details, such as edges and textures; for the other facial features, those details also played a significant role. Furthermore, unlike for other facial features, the activation maps indicated that the significance of the eyebrow areas could not be readily adjusted to compensate for the absence of eyebrow information, which explains why masking the eyebrows led to larger deficits in face recognition performance. Additionally, we observed a synergistic relationship among facial features, providing evidence for holistic processing of faces within the DCNN.
Discussion: Overall, our study sheds light on the underlying mechanisms of face recognition and underscores the potential of DCNNs as valuable tools for further exploration in this field.
{"title":"Understanding of facial features in face perception: insights from deep convolutional neural networks","authors":"Qianqian Zhang, Yueyi Zhang, Ning Liu, Xiaoyan Sun","doi":"10.3389/fncom.2024.1209082","DOIUrl":"https://doi.org/10.3389/fncom.2024.1209082","url":null,"abstract":"IntroductionFace recognition has been a longstanding subject of interest in the fields of cognitive neuroscience and computer vision research. One key focus has been to understand the relative importance of different facial features in identifying individuals. Previous studies in humans have demonstrated the crucial role of eyebrows in face recognition, potentially even surpassing the importance of the eyes. However, eyebrows are not only vital for face recognition but also play a significant role in recognizing facial expressions and intentions, which might occur simultaneously and influence the face recognition process.MethodsTo address these challenges, our current study aimed to leverage the power of deep convolutional neural networks (DCNNs), an artificial face recognition system, which can be specifically tailored for face recognition tasks. In this study, we investigated the relative importance of various facial features in face recognition by selectively blocking feature information from the input to the DCNN. Additionally, we conducted experiments in which we systematically blurred the information related to eyebrows to varying degrees.ResultsOur findings aligned with previous human research, revealing that eyebrows are the most critical feature for face recognition, followed by eyes, mouth, and nose, in that order. The results demonstrated that the presence of eyebrows was more crucial than their specific high-frequency details, such as edges and textures, compared to other facial features, where the details also played a significant role. Furthermore, our results revealed that, unlike other facial features, the activation map indicated that the significance of eyebrows areas could not be readily adjusted to compensate for the absence of eyebrow information. This finding explains why masking eyebrows led to more significant deficits in face recognition performance. Additionally, we observed a synergistic relationship among facial features, providing evidence for holistic processing of faces within the DCNN.DiscussionOverall, our study sheds light on the underlying mechanisms of face recognition and underscores the potential of using DCNNs as valuable tools for further exploration in this field.","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140603308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
According to experts in neurology, brain tumours pose a serious risk to human health. The clinical identification and treatment of brain tumours rely heavily on accurate segmentation. The varied sizes, shapes, and locations of brain tumours make accurate automated segmentation a formidable obstacle in the field of neuroscience. U-Net, with its computational intelligence and concise design, has lately been the go-to model for medical image segmentation, but problems with restricted local receptive fields, lost spatial information, and inadequate contextual information continue to limit such models. A convolutional neural network (CNN) and a Mel-spectrogram form the basis of the accompanying cough recognition technique: voice recordings from a variety of intricate settings are first combined and the audio data are enhanced; the data are then preprocessed to a consistent length and converted into a Mel-spectrogram. A novel model for brain tumor segmentation (BTS), the Intelligence Cascade U-Net (ICU-Net), is proposed to address these issues. It is built on dynamic convolution and uses a non-local attention mechanism. To reconstruct more detailed spatial information on brain tumours, the principal design is a two-stage cascade of 3D U-Net. The objective is to identify the learnable parameters that maximize the likelihood of the data. To strengthen the network's ability to capture long-distance dependencies, Expectation–Maximization is applied to the cascade network's lateral connections, enabling it to leverage contextual data more effectively. Lastly, to enhance the network's ability to capture local characteristics, dynamic convolutions with local adaptive capabilities are used in place of the cascade network's standard convolutions. We compared our results with those of other typical methods and ran extensive tests using the publicly available BraTS 2019/2020 datasets. The experimental data show that the suggested method performs well on BTS tasks: the Dice scores for tumor core (TC), complete tumor, and enhanced tumor on the BraTS 2019/2020 validation sets are 0.897/0.903, 0.826/0.828, and 0.781/0.786, respectively.
{"title":"Brain tumor segmentation using neuro-technology enabled intelligence-cascaded U-Net model","authors":"Haewon Byeon, Mohannad Al-Kubaisi, Ashit Kumar Dutta, Faisal Alghayadh, Mukesh Soni, Manisha Bhende, Venkata Chunduri, K. Suresh Babu, Rubal Jeet","doi":"10.3389/fncom.2024.1391025","DOIUrl":"https://doi.org/10.3389/fncom.2024.1391025","url":null,"abstract":"According to experts in neurology, brain tumours pose a serious risk to human health. The clinical identification and treatment of brain tumours rely heavily on accurate segmentation. The varied sizes, forms, and locations of brain tumours make accurate automated segmentation a formidable obstacle in the field of neuroscience. U-Net, with its computational intelligence and concise design, has lately been the go-to model for fixing medical picture segmentation issues. Problems with restricted local receptive fields, lost spatial information, and inadequate contextual information are still plaguing artificial intelligence. A convolutional neural network (CNN) and a Mel-spectrogram are the basis of this cough recognition technique. First, we combine the voice in a variety of intricate settings and improve the audio data. After that, we preprocess the data to make sure its length is consistent and create a Mel-spectrogram out of it. A novel model for brain tumor segmentation (BTS), Intelligence Cascade U-Net (ICU-Net), is proposed to address these issues. It is built on dynamic convolution and uses a non-local attention mechanism. In order to reconstruct more detailed spatial information on brain tumours, the principal design is a two-stage cascade of 3DU-Net. The paper’s objective is to identify the best learnable parameters that will maximize the likelihood of the data. After the network’s ability to gather long-distance dependencies for AI, Expectation–Maximization is applied to the cascade network’s lateral connections, enabling it to leverage contextual data more effectively. Lastly, to enhance the network’s ability to capture local characteristics, dynamic convolutions with local adaptive capabilities are used in place of the cascade network’s standard convolutions. We compared our results to those of other typical methods and ran extensive testing utilising the publicly available BraTS 2019/2020 datasets. The suggested method performs well on tasks involving BTS, according to the experimental data. The Dice scores for tumor core (TC), complete tumor, and enhanced tumor segmentation BraTS 2019/2020 validation sets are 0.897/0.903, 0.826/0.828, and 0.781/0.786, respectively, indicating high performance in BTS.","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140564373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-02 | DOI: 10.3389/fncom.2024.1367712
Howard Schneider
The Causal Cognitive Architecture is a brain-inspired cognitive architecture developed from the hypothesis that the navigation circuits in the ancestors of mammals duplicated to eventually form the neocortex. Thus, millions of neocortical minicolumns are functionally modeled in the architecture as millions of “navigation maps.” An investigation of a cognitive architecture based on these navigation maps has previously shown that modest changes in the architecture allow the ready emergence of human cognitive abilities such as grounded, full causal decision-making, full analogical reasoning, and near-full compositional language abilities. In this study, additional biologically plausible modest changes to the architecture are considered and show the emergence of super-human planning abilities. The architecture should be considered as a viable alternative pathway toward the development of more advanced artificial intelligence, as well as to give insight into the emergence of natural human intelligence.
{"title":"The emergence of enhanced intelligence in a brain-inspired cognitive architecture","authors":"Howard Schneider","doi":"10.3389/fncom.2024.1367712","DOIUrl":"https://doi.org/10.3389/fncom.2024.1367712","url":null,"abstract":"The Causal Cognitive Architecture is a brain-inspired cognitive architecture developed from the hypothesis that the navigation circuits in the ancestors of mammals duplicated to eventually form the neocortex. Thus, millions of neocortical minicolumns are functionally modeled in the architecture as millions of “navigation maps.” An investigation of a cognitive architecture based on these navigation maps has previously shown that modest changes in the architecture allow the ready emergence of human cognitive abilities such as grounded, full causal decision-making, full analogical reasoning, and near-full compositional language abilities. In this study, additional biologically plausible modest changes to the architecture are considered and show the emergence of super-human planning abilities. The architecture should be considered as a viable alternative pathway toward the development of more advanced artificial intelligence, as well as to give insight into the emergence of natural human intelligence.","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140884585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-22 | DOI: 10.3389/fncom.2024.1357607
Raúl Fernández-Ruiz, Esther Núñez-Vidal, Irene Hidalgo-delaguía, Elena Garayzábal-Heinze, Agustín Álvarez-Marquina, Rafael Martínez-Olalla, Daniel Palacios-Alonso
This research work introduces a novel, nonintrusive method for the automatic identification of Smith–Magenis syndrome, which has traditionally been studied through genetic markers. The method uses cepstral peak prominence and various machine learning techniques, relying on a single metric computed by the research group. The performance of these techniques is evaluated across two case studies, each employing a different data preprocessing approach. A proprietary data "windowing" technique is also developed to derive a more representative dataset. To address class imbalance in the dataset, the synthetic minority oversampling technique (SMOTE) is applied for data augmentation. These preprocessing techniques yielded promising results from a limited initial dataset. The study concludes that k-nearest neighbors and linear discriminant analysis perform best, and that cepstral peak prominence is a promising measure for identifying Smith–Magenis syndrome.
{"title":"Identification of Smith–Magenis syndrome cases through an experimental evaluation of machine learning methods","authors":"Raúl Fernández-Ruiz, Esther Núñez-Vidal, Irene Hidalgo-delaguía, Elena Garayzábal-Heinze, Agustín Álvarez-Marquina, Rafael Martínez-Olalla, Daniel Palacios-Alonso","doi":"10.3389/fncom.2024.1357607","DOIUrl":"https://doi.org/10.3389/fncom.2024.1357607","url":null,"abstract":"This research work introduces a novel, nonintrusive method for the automatic identification of Smith–Magenis syndrome, traditionally studied through genetic markers. The method utilizes cepstral peak prominence and various machine learning techniques, relying on a single metric computed by the research group. The performance of these techniques is evaluated across two case studies, each employing a unique data preprocessing approach. A proprietary data “windowing” technique is also developed to derive a more representative dataset. To address class imbalance in the dataset, the synthetic minority oversampling technique (SMOTE) is applied for data augmentation. The application of these preprocessing techniques has yielded promising results from a limited initial dataset. The study concludes that the k-nearest neighbors and linear discriminant analysis perform best, and that cepstral peak prominence is a promising measure for identifying Smith–Magenis syndrome.","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140198474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-22 | DOI: 10.3389/fncom.2024.1349408
Giulio Sandini, Alessandra Sciutti, Pietro Morasso
The trend in industrial/service robotics is to develop robots that can cooperate with people, interacting with them in an autonomous, safe and purposive way. These are the fundamental elements characterizing the fourth and fifth industrial revolutions (4IR, 5IR): the crucial innovation is the adoption of intelligent technologies that can allow the development of cyber-physical systems, similar if not superior to humans. The common wisdom is that intelligence might be provided by AI (Artificial Intelligence), a claim that is supported more by media coverage and commercial interests than by solid scientific evidence. AI is currently conceived in a quite broad sense, encompassing LLMs and much else, without any unifying principle beyond its success in various areas. The current view of AI robotics mostly follows a purely disembodied approach that is consistent with the old-fashioned, Cartesian mind-body dualism, reflected in the software-hardware distinction inherent to the von Neumann computing architecture. The working hypothesis of this position paper is that the road to the next generation of autonomous robotic agents with cognitive capabilities requires a fully brain-inspired, embodied cognitive approach that avoids the trap of mind-body dualism and aims at the full integration of Bodyware and Cogniware. We name this approach Artificial Cognition (ACo) and ground it in Cognitive Neuroscience. It is specifically focused on proactive knowledge acquisition based on bidirectional human-robot interaction: the practical advantage is to enhance generalization and explainability. Moreover, we believe that a brain-inspired network of interactions is necessary for allowing humans to cooperate with artificial cognitive agents, building a growing level of personal trust and reciprocal accountability; this is clearly missing, although actively sought, in current AI. The ACo approach is a work in progress that can take advantage of a number of research threads, some of them antedating the early attempts to define AI concepts and methods. In the rest of the paper we consider some of the building blocks that need to be revisited in a unitary framework: the principles of developmental robotics, the methods of action representation with prospection capabilities, and the crucial role of social interaction.
{"title":"Artificial cognition vs. artificial intelligence for next-generation autonomous robotic agents","authors":"Giulio Sandini, Alessandra Sciutti, Pietro Morasso","doi":"10.3389/fncom.2024.1349408","DOIUrl":"https://doi.org/10.3389/fncom.2024.1349408","url":null,"abstract":"The trend in industrial/service robotics is to develop robots that can cooperate with people, interacting with them in an autonomous, safe and purposive way. These are the fundamental elements characterizing the fourth and the fifth industrial revolutions (4IR, 5IR): the crucial innovation is the adoption of intelligent technologies that can allow the development of <jats:italic>cyber-physical systems</jats:italic>, similar if not superior to humans. The common wisdom is that intelligence might be provided by AI (Artificial Intelligence), a claim that is supported more by media coverage and commercial interests than by solid scientific evidence. AI is currently conceived in a quite broad sense, encompassing LLMs and a lot of other things, without any unifying principle, but self-motivating for the success in various areas. The current view of AI robotics mostly follows a purely disembodied approach that is consistent with the old-fashioned, Cartesian mind-body dualism, reflected in the software-hardware distinction inherent to the von Neumann computing architecture. The working hypothesis of this position paper is that the road to the next generation of autonomous robotic agents with cognitive capabilities requires a fully brain-inspired, embodied cognitive approach that avoids the trap of mind-body dualism and aims at the full integration of <jats:italic>Bodyware</jats:italic> and <jats:italic>Cogniware.</jats:italic> We name this approach Artificial Cognition (ACo) and ground it in Cognitive Neuroscience. It is specifically focused on proactive knowledge acquisition based on bidirectional human-robot interaction: the practical advantage is to enhance generalization and explainability. Moreover, we believe that a brain-inspired network of interactions is necessary for allowing humans to cooperate with artificial cognitive agents, building a growing level of personal trust and reciprocal accountability: this is clearly missing, although actively sought, in current AI. The ACo approach is a work in progress that can take advantage of a number of research threads, some of them antecedent the early attempts to define AI concepts and methods. In the rest of the paper we will consider some of the building blocks that need to be re-visited in a unitary framework: the principles of developmental robotics, the methods of action representation with prospection capabilities, and the crucial role of social interaction.","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140198849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-21 | DOI: 10.3389/fncom.2024.1340251
Sayani Mallick, Veeky Baths
Introduction: Epilepsy is a chronic neurological disorder characterized by abnormal electrical activity in the brain, often leading to recurrent seizures. With 50 million people worldwide affected by epilepsy, there is a pressing need for efficient and accurate methods to detect and diagnose seizures. Electroencephalogram (EEG) signals have emerged as a valuable tool for detecting epilepsy and other neurological disorders. Traditionally, analyzing EEG signals for seizure detection has relied on manual inspection by experts, which is time-consuming, labor-intensive, and susceptible to human error. To address these limitations, researchers have turned to machine learning and deep learning techniques to automate the seizure detection process.
Methods: In this work, we propose a novel method for epileptic seizure detection that combines a 1-D convolutional layer, a bidirectional Long Short-Term Memory (LSTM) layer, a Gated Recurrent Unit (GRU) layer, and an average-pooling layer into a single unit. This unit is used repeatedly in the proposed model to extract features, which are then passed to dense layers to predict the class of the EEG waveform. The performance of the proposed model is verified on the Bonn dataset. To assess the robustness and generalizability of the proposed architecture, we employ five-fold cross-validation: by dividing the dataset into five subsets and iteratively training and testing the model on different combinations of these subsets, we obtain robust performance measures, including accuracy, sensitivity, and specificity.
Results: Our proposed model achieves an accuracy of 99–100% for binary classification into seizure and normal waveforms, 97.2%–99.2% for classification into normal, interictal, and seizure waveforms, 96.2%–98.4% for four-class classification, and 95.81%–98% for five-class classification.
Discussion: Our proposed models achieve significant improvements in the performance metrics for both binary and multiclass classification. Using EEG signals of varying lengths, we demonstrate the effectiveness of the proposed architecture in accurately detecting epileptic seizures. The results indicate its potential as a reliable and efficient tool for automated seizure detection, paving the way for improved diagnosis and management of epilepsy.
{"title":"Novel deep learning framework for detection of epileptic seizures using EEG signals","authors":"Sayani Mallick, Veeky Baths","doi":"10.3389/fncom.2024.1340251","DOIUrl":"https://doi.org/10.3389/fncom.2024.1340251","url":null,"abstract":"IntroductionEpilepsy is a chronic neurological disorder characterized by abnormal electrical activity in the brain, often leading to recurrent seizures. With 50 million people worldwide affected by epilepsy, there is a pressing need for efficient and accurate methods to detect and diagnose seizures. Electroencephalogram (EEG) signals have emerged as a valuable tool in detecting epilepsy and other neurological disorders. Traditionally, the process of analyzing EEG signals for seizure detection has relied on manual inspection by experts, which is time-consuming, labor-intensive, and susceptible to human error. To address these limitations, researchers have turned to machine learning and deep learning techniques to automate the seizure detection process.MethodsIn this work, we propose a novel method for epileptic seizure detection, leveraging the power of 1-D Convolutional layers in combination with Bidirectional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) and Average pooling Layer as a single unit. This unit is repeatedly used in the proposed model to extract the features. The features are then passed to the Dense layers to predict the class of the EEG waveform. The performance of the proposed model is verified on the Bonn dataset. To assess the robustness and generalizability of our proposed architecture, we employ five-fold cross-validation. By dividing the dataset into five subsets and iteratively training and testing the model on different combinations of these subsets, we obtain robust performance measures, including accuracy, sensitivity, and specificity.ResultsOur proposed model achieves an accuracy of 99–100% for binary classifications into seizure and normal waveforms, 97.2%–99.2% accuracy for classifications into normal-ictal-seizure waveforms, 96.2%–98.4% accuracy for four class classification and accuracy of 95.81%–98% for five class classification.DiscussionOur proposed models have achieved significant improvements in the performance metrics for the binary classifications and multiclass classifications. We demonstrate the effectiveness of the proposed architecture in accurately detecting epileptic seizures from EEG signals by using EEG signals of varying lengths. The results indicate its potential as a reliable and efficient tool for automated seizure detection, paving the way for improved diagnosis and management of epilepsy.","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140198468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-19 | DOI: 10.3389/fncom.2024.1384924
Yanfang Hou, Hui Tian, Chengmao Wang
A good intelligent learning model is key to completely recognizing scene information and accurately recognizing specific targets in intelligent unmanned systems. This study proposes a new associative memory model based on the semi-tensor product (STP) of matrices to address the problems of information storage capacity and association. First, some preliminaries are introduced to facilitate modeling, and the limited information storage capacity of the discrete Hopfield neural network (DHNN) when applied to associative memory is pointed out. Second, learning modes are equivalently converted into their algebraic forms using the STP. A memory matrix is constructed to remember these learning modes accurately. Furthermore, an algorithm for updating the memory matrix is developed to improve the association ability of the model, and another algorithm shows how the model learns and associates. Finally, examples are given to demonstrate the effectiveness and advantages of these results. Compared with mainstream DHNNs, our model can remember learning modes more accurately with fewer nodes.
{"title":"A novel associative memory model based on semi-tensor product (STP)","authors":"Yanfang Hou, Hui Tian, Chengmao Wang","doi":"10.3389/fncom.2024.1384924","DOIUrl":"https://doi.org/10.3389/fncom.2024.1384924","url":null,"abstract":"A good intelligent learning model is the key to complete recognition of scene information and accurate recognition of specific targets in intelligent unmanned system. This study proposes a new associative memory model based on the semi-tensor product (STP) of matrices, to address the problems of information storage capacity and association. First, some preliminaries are introduced to facilitate modeling, and the problem of information storage capacity in the application of discrete Hopfield neural network (DHNN) to associative memory is pointed out. Second, learning modes are equivalently converted into their algebraic forms by using STP. A memory matrix is constructed to accurately remember these learning modes. Furthermore, an algorithm for updating the memory matrix is developed to improve the association ability of the model. And another algorithm is provided to show how our model learns and associates. Finally, some examples are given to demonstrate the effectiveness and advantages of our results. Compared with mainstream DHNNs, our model can remember learning modes more accurately with fewer nodes.","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140167381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}