
Latest publications in Frontiers in Neurorobotics

RL-QPSO net: deep reinforcement learning-enhanced QPSO for efficient mobile robot path planning.
IF 2.6 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-01-08 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1464572
Yang Jing, Li Weiya

Introduction: Path planning in complex and dynamic environments poses a significant challenge in the field of mobile robotics. Traditional path planning methods such as genetic algorithms, Dijkstra's algorithm, and Floyd's algorithm typically rely on deterministic search strategies, which can lead to local optima and lack global search capabilities in dynamic settings. These methods have high computational costs and are not efficient for real-time applications.

Methods: To address these issues, this paper presents a Quantum-behaved Particle Swarm Optimization model enhanced by deep reinforcement learning (RL-QPSO Net) aimed at improving global optimality and adaptability in path planning. The RL-QPSO Net combines quantum-inspired particle swarm optimization (QPSO) and deep reinforcement learning (DRL) modules through a dual control mechanism to achieve path optimization and environmental adaptation. The QPSO module is responsible for global path optimization, using quantum mechanics to avoid local optima, while the DRL module adjusts strategies in real-time based on environmental feedback, thus enhancing decision-making capabilities in complex high-dimensional scenarios.
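As a rough illustration of the global-search core such a model builds on, the standard quantum-behaved position update can be sketched as follows. This is the textbook QPSO step, not the paper's implementation; the idea that a DRL policy would adapt the contraction-expansion coefficient `beta` from environment feedback is our assumption.

```python
import numpy as np

def qpso_step(X, pbest, gbest, beta, rng):
    """One standard quantum-behaved PSO position update.

    X:     (n, d) current particle positions
    pbest: (n, d) per-particle best positions
    gbest: (d,)   swarm-wide best position
    beta:  contraction-expansion coefficient; in RL-QPSO Net a DRL
           policy would presumably adapt a knob like this from
           environment feedback (our assumption).
    """
    n, d = X.shape
    mbest = pbest.mean(axis=0)                 # mean-best position
    phi = rng.random((n, d))
    p = phi * pbest + (1.0 - phi) * gbest      # per-particle local attractor
    u = rng.uniform(1e-12, 1.0, (n, d))        # avoid log(1/0)
    sign = np.where(rng.random((n, d)) < 0.5, -1.0, 1.0)
    return p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
```

Iterating this update, re-evaluating fitness, and refreshing `pbest`/`gbest` gives the basic QPSO loop; the paper's contribution is wrapping such a loop in a learned dual-control mechanism.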

Results and discussion: Experiments were conducted on multiple datasets, including Cityscapes, NYU Depth V2, Mapillary Vistas, and ApolloScape, and the results showed that RL-QPSO Net outperforms traditional methods in terms of accuracy, computational efficiency, and model complexity. This method demonstrated significant improvements in accuracy and computational efficiency, providing an effective path planning solution for real-time applications in complex environments for mobile robots. In the future, this method could be further extended to resource-limited environments to achieve broader practical applications.

Citations: 0
Directional Spatial and Spectral Attention Network (DSSA Net) for EEG-based emotion recognition.
IF 2.6 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1481746
Jiyao Liu, Lang He, Haifeng Chen, Dongmei Jiang

Significant strides have been made in emotion recognition from Electroencephalography (EEG) signals. However, effectively modeling the diverse spatial, spectral, and temporal features of multi-channel brain signals remains a challenge. This paper proposes a novel framework, the Directional Spatial and Spectral Attention Network (DSSA Net), which enhances emotion recognition accuracy by capturing critical spatial-spectral-temporal features from EEG signals. The framework consists of three modules: Positional Attention (PA), Spectral Attention (SA), and Temporal Attention (TA). The PA module includes Vertical Attention (VA) and Horizontal Attention (HA) branches, designed to detect active brain regions from different orientations. Experimental results on three benchmark EEG datasets demonstrate that DSSA Net outperforms most competitive methods. On the SEED and SEED-IV datasets, it achieves accuracies of 96.61% and 85.07% for subject-dependent emotion recognition, respectively, and 87.03% and 75.86% for subject-independent recognition. On the DEAP dataset, it attains accuracies of 94.97% for valence and 94.73% for arousal. These results showcase the framework's ability to leverage both spatial and spectral differences across brain hemispheres and regions, enhancing classification accuracy for emotion recognition.
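A parameter-free caricature of the vertical/horizontal attention idea can make the PA module's intent concrete: pool a feature map along each axis and re-weight rows and columns by the pooled profiles. The real VA/HA branches are learned; the function below is a hypothetical simplification of ours.

```python
import numpy as np

def directional_attention(fmap):
    """Parameter-free caricature of vertical/horizontal attention:
    pool a 2-D feature map along each axis, squash the pooled
    profiles with a sigmoid, and re-weight rows and columns so
    active rows/columns stand out. The actual VA/HA branches in
    DSSA Net are learned modules."""
    row = fmap.mean(axis=1, keepdims=True)   # one value per row
    col = fmap.mean(axis=0, keepdims=True)   # one value per column
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    return fmap * sig(row) * sig(col)
```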

Citations: 0
KalmanFormer: using transformer to model the Kalman Gain in Kalman Filters.
IF 2.6 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1460255
Siyuan Shen, Jichen Chen, Guanfeng Yu, Zhengjun Zhai, Pujie Han

Introduction: Tracking the hidden states of dynamic systems is a fundamental task in signal processing. Recursive Kalman Filters (KF) are widely regarded as an efficient solution for linear and Gaussian systems, offering low computational complexity. However, real-world applications often involve non-linear dynamics, making it challenging for traditional Kalman Filters to achieve accurate state estimation. Additionally, accurately modeling system dynamics and noise is often difficult in practice. To address these limitations, we propose the KalmanFormer, a hybrid model-driven and data-driven state estimator. By leveraging data, the KalmanFormer improves state estimation under non-linear conditions and partial-information scenarios.

Methods: The proposed KalmanFormer integrates the classical Kalman Filter with a Transformer framework. Specifically, it uses the Transformer to learn the Kalman Gain directly from data, without requiring prior knowledge of the noise parameters. The learned Kalman Gain is then incorporated into the standard Kalman Filter workflow, enabling the system to better handle non-linearities and model mismatches. This hybrid approach combines the strengths of data-driven learning and model-driven methodologies to achieve robust state estimation.
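The structural change this describes is small but important: one filter step where the gain is an input rather than a quantity derived from noise covariances. A minimal sketch, assuming the learned gain absorbs the noise terms (the function name and the omission of Q/R are our simplifications):

```python
import numpy as np

def kf_step_with_supplied_gain(x, P, z, F, H, K):
    """One Kalman-filter predict/update step in which the gain K is
    supplied externally (e.g. predicted by a Transformer, as in
    KalmanFormer) instead of computed from noise covariances.
    Process/measurement noise terms are omitted on the assumption
    that the learned gain absorbs them (a simplification)."""
    x_pred = F @ x                 # state prediction
    P_pred = F @ P @ F.T           # covariance prediction (no Q term)
    innov = z - H @ x_pred         # innovation
    x_new = x_pred + K @ innov     # state update with the supplied gain
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

With the classical filter, `K` would be `P_pred @ H.T @ inv(H @ P_pred @ H.T + R)`; here that computation is replaced by a network's prediction.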

Results and discussion: To evaluate the effectiveness of KalmanFormer, we conducted numerical experiments on both synthetic and real-world datasets. The results demonstrate that KalmanFormer outperforms the classical Extended Kalman Filter (EKF) in the same settings, achieving superior accuracy in tracking hidden states and demonstrating resilience to non-linearities and imprecise system models.

Citations: 0
MSGU-Net: a lightweight multi-scale ghost U-Net for image segmentation.
IF 2.6 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-01-06 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1480055
Hua Cheng, Yang Zhang, Huangxin Xu, Dingliang Li, Zejian Zhong, Yinchuan Zhao, Zhuo Yan

U-Net and its variants have been widely used in the field of image segmentation. In this paper, a lightweight multi-scale Ghost U-Net (MSGU-Net) network architecture is proposed, which processes image segmentation tasks efficiently and quickly while generating a high-quality mask for each object. The pyramid-structure (SPP-Inception) module and the ghost module are seamlessly integrated in a lightweight manner. Equipped with an efficient local attention (ELA) mechanism and an attention gate mechanism, they are designed to accurately identify the region of interest (ROI). The SPP-Inception module and the ghost module work in tandem to merge multi-scale information derived from low-level features, high-level features, and decoder masks at each stage. Comparative experiments were conducted between the proposed MSGU-Net and state-of-the-art networks on the ISIC2017 and ISIC2018 datasets. In short, compared to the baseline U-Net, our model achieves superior segmentation performance while reducing parameter and computation costs by 96.08% and 92.59%, respectively.
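To see where savings of that magnitude can come from, it helps to compare parameter counts for a plain convolution and a Ghost module (Han et al.'s design, which ghost U-Nets build on): a small primary convolution produces a fraction of the output channels, and cheap depthwise operations generate the rest. The helper names below are ours.

```python
def conv_params(cin, cout, k):
    """Weights in a plain k x k convolution (bias ignored)."""
    return cin * cout * k * k

def ghost_params(cin, cout, k, s=2, dk=3):
    """Weights in a Ghost module: a primary conv produces cout // s
    intrinsic feature maps, then cheap dk x dk depthwise ops generate
    the remaining (s - 1) * cout // s channels."""
    m = cout // s
    return conv_params(cin, m, k) + m * (s - 1) * dk * dk
```

For `cin=64, cout=128, k=3` the Ghost module uses roughly half the weights (s=2, the usual setting); stacking such modules throughout a U-Net, together with the other lightweight blocks, is how order-of-magnitude reductions accumulate.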

Citations: 0
Architectural planning robot driven by unsupervised learning for space optimization.
IF 2.6 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-01-03 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1517960
Zhe Zhang, Yuchun Zheng

Introduction: Space optimization in architectural planning is a crucial task for maximizing functionality and improving user experience in built environments. Traditional approaches often rely on manual planning or supervised learning techniques, which can be limited by the availability of labeled data and may not adapt well to complex spatial requirements.

Methods: To address these limitations, this paper presents a novel architectural planning robot driven by unsupervised learning for automatic space optimization. The proposed framework integrates spatial attention, clustering, and state refinement mechanisms to autonomously learn and optimize spatial configurations without the need for labeled training data. The spatial attention mechanism focuses the model on key areas within the architectural space, clustering identifies functional zones, and state refinement iteratively improves the spatial layout by adjusting based on learned patterns. Experiments conducted on multiple 3D datasets demonstrate the effectiveness of the proposed approach in achieving optimized space layouts with reduced computational requirements.
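Of the three mechanisms, the clustering step is the easiest to picture: grouping points of a layout into functional zones without labels. A deliberately simplified stand-in, using plain k-means (the actual model couples clustering with attention and iterative state refinement; the function below is hypothetical):

```python
import numpy as np

def kmeans_zones(points, k, iters=20, seed=0):
    """Plain k-means over 3D layout points -- a simplified stand-in
    for the unsupervised clustering step that groups space into
    functional zones. points: (n, 3) float array; returns per-point
    zone labels and zone centers."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):              # skip empty clusters
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers
```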

Results and discussion: The results show significant improvements in layout efficiency and processing time compared to traditional methods, indicating the potential for real-world applications in automated architectural planning and dynamic space management. This work contributes to the field by providing a scalable solution for architectural space optimization that adapts to diverse spatial requirements through unsupervised learning.

Citations: 0
EEG-powered cerebral transformer for athletic performance.
IF 2.6 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-20 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1499734
Qikai Sun

Introduction: In recent years, with advancements in wearable devices and biosignal analysis technologies, sports performance analysis has become an increasingly popular research field, particularly due to the growing demand for real-time monitoring of athletes' conditions in sports training and competitive events. Traditional methods of sports performance analysis typically rely on video data or sensor data for motion recognition. However, unimodal data often fails to fully capture the neural state of athletes, leading to limitations in accuracy and real-time performance when dealing with complex movement patterns. Moreover, these methods struggle with multimodal data fusion, making it difficult to fully leverage the deep information from electroencephalogram (EEG) signals.

Methods: To address these challenges, this paper proposes a "Cerebral Transformer" model based on EEG signals and video data. By employing an adaptive attention mechanism and cross-modal fusion, the model effectively combines EEG signals and video streams to achieve precise recognition and analysis of athletes' movements. The model's effectiveness was validated through experiments on four datasets: SEED, DEAP, eSports Sensors, and MODA. The results show that the proposed model outperforms existing mainstream methods in terms of accuracy, recall, and F1 score, while also demonstrating high computational efficiency.
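The generic mechanism behind this kind of cross-modal fusion is cross-attention: tokens from one modality (say, EEG windows) query tokens from the other (video frames). A minimal sketch with the learned projection matrices omitted for brevity; this illustrates the mechanism, not the paper's exact fusion block.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context):
    """Scaled dot-product cross-attention without learned projections:
    each query token (e.g. an EEG window embedding) forms a convex
    combination of context tokens (e.g. video frame embeddings).
    queries: (n_q, d), context: (n_kv, d); returns (n_q, d)."""
    d = queries.shape[-1]
    weights = softmax(queries @ context.T / np.sqrt(d))  # (n_q, n_kv)
    return weights @ context
```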

Results and discussion: The significance of this study lies in providing a more comprehensive and efficient solution for sports performance analysis. Through cross-modal data fusion, it not only improves the accuracy of complex movement recognition but also provides technical support for monitoring athletes' neural states, offering important applications in sports training and medical rehabilitation.

Citations: 0
Edge-guided feature fusion network for RGB-T salient object detection.
IF 2.6 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-17 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1489658
Yuanlin Chen, Zengbao Sun, Cheng Yan, Ming Zhao

Introduction: RGB-T Salient Object Detection (SOD) aims to accurately segment salient regions in both visible light and thermal infrared images. However, many existing methods overlook the critical complementarity between these modalities, which can enhance detection accuracy.

Methods: We propose the Edge-Guided Feature Fusion Network (EGFF-Net), which consists of cross-modal feature extraction, edge-guided feature fusion, and salience map prediction. Firstly, the cross-modal feature extraction module captures and aggregates united and intersecting information in each local region of RGB and thermal images. Then, the edge-guided feature fusion module enhances the edge features of salient regions, considering that edge information is very helpful in refining significant area details. Moreover, a layer-by-layer decoding structure integrates multi-level features and generates the prediction of salience maps.
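The role of the edge prior can be sketched in a few lines: compute an edge map of the fused features and boost pixels near boundaries. This single-channel toy (both functions are hypothetical simplifications; EGFF-Net's fusion is learned and multi-scale) shows why edge information helps refine region details.

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude of a 2-D array -- the kind of edge
    prior an edge-guided module can use to sharpen salient boundaries."""
    kx = np.array([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]])
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def edge_guided_fuse(rgb_feat, thermal_feat):
    """Hypothetical single-channel fusion: average the two modalities,
    then boost pixels in proportion to edge strength so boundaries
    are emphasized in the fused map."""
    fused = 0.5 * (rgb_feat + thermal_feat)
    edge = sobel_edges(fused)
    weight = edge / (edge.max() + 1e-8)
    return fused * (1.0 + weight)
```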

Results: We conduct extensive experiments on three benchmark datasets and compare EGFF-Net with state-of-the-art methods. Our approach achieves superior performance, demonstrating the effectiveness of the proposed modules in improving both detection accuracy and boundary refinement.

Discussion: The results highlight the importance of integrating cross-modal information and edge-guided fusion in RGB-T SOD. Our method outperforms existing techniques and provides a robust framework for future developments in multi-modal saliency detection.

Edge-guided feature fusion network for RGB-T salient object detection.
IF 2.6 4区 计算机科学 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-12-17 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1489658
Yuanlin Chen, Zengbao Sun, Cheng Yan, Ming Zhao

Introduction: RGB-T Salient Object Detection (SOD) aims to accurately segment salient regions in both visible light and thermal infrared images. However, many existing methods overlook the critical complementarity between these modalities, which can enhance detection accuracy.

Methods: We propose the Edge-Guided Feature Fusion Network (EGFF-Net), which consists of cross-modal feature extraction, edge-guided feature fusion, and salience map prediction. Firstly, the cross-modal feature extraction module captures and aggregates united and intersecting information in each local region of RGB and thermal images. Then, the edge-guided feature fusion module enhances the edge features of salient regions, considering that edge information is very helpful in refining significant area details. Moreover, a layer-by-layer decoding structure integrates multi-level features and generates the prediction of salience maps.

Results: We conduct extensive experiments on three benchmark datasets and compare EGFF-Net with state-of-the-art methods. Our approach achieves superior performance, demonstrating the effectiveness of the proposed modules in improving both detection accuracy and boundary refinement.

Discussion: The results highlight the importance of integrating cross-modal information and edge-guided fusion in RGB-T SOD. Our method outperforms existing techniques and provides a robust framework for future developments in multi-modal saliency detection.
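The abstract gives no implementation details; as a rough sketch of the edge-guidance idea only — the Sobel operator, the mean-fusion rule, and the gain formula below are illustrative assumptions, not EGFF-Net's actual modules — fused RGB-thermal features can be re-weighted by an edge map:

```python
import numpy as np

def sobel_edges(img):
    """Approximate edge magnitude with 3x3 Sobel filters (zero padding)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img, 1)
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = p[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def edge_guided_fusion(rgb_feat, thermal_feat):
    """Fuse two single-channel feature maps, then boost edge regions.

    The fusion rule (mean) and the edge gain are illustrative choices,
    not the ones used by EGFF-Net."""
    fused = 0.5 * (rgb_feat + thermal_feat)      # aggregate the two modalities
    edges = sobel_edges(fused)
    gain = 1.0 + edges / (edges.max() + 1e-8)    # in [1, 2], largest on boundaries
    return fused * gain

rgb = np.zeros((8, 8)); rgb[2:6, 2:6] = 1.0      # toy "salient" square
thermal = np.zeros((8, 8)); thermal[2:6, 2:6] = 0.8
out = edge_guided_fusion(rgb, thermal)
print(out.shape)  # (8, 8)
```

Interior pixels of the square keep the plain fused value, while boundary pixels are amplified — the same intuition as letting edge features refine the details of salient regions.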
引用次数: 0
Cross-attention swin-transformer for detailed segmentation of ancient architectural color patterns. 古建筑色彩图案精细分割的交叉关注旋转变压器。
IF 2.6 4区 计算机科学 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-12-13 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1513488
Lv Yongyin, Yu Caixia

Introduction: Segmentation tasks in computer vision play a crucial role in various applications, ranging from object detection to medical imaging and cultural heritage preservation. Traditional approaches, including convolutional neural networks (CNNs) and standard transformer-based models, have achieved significant success; however, they often face challenges in capturing fine-grained details and maintaining efficiency across diverse datasets. These methods struggle with balancing precision and computational efficiency, especially when dealing with complex patterns and high-resolution images.

Methods: To address these limitations, we propose a novel segmentation model that integrates a hierarchical vision transformer backbone with multi-scale self-attention, cascaded attention decoding, and diffusion-based robustness enhancement. Our approach aims to capture both local details and global contexts effectively while maintaining lower computational overhead.

Results and discussion: Experiments conducted on four diverse datasets, including Ancient Architecture, MS COCO, Cityscapes, and ScanNet, demonstrate that our model outperforms state-of-the-art methods in accuracy, recall, and computational efficiency. The results highlight the model's ability to generalize well across different tasks and provide robust segmentation, even in challenging scenarios. Our work paves the way for more efficient and precise segmentation techniques, making it valuable for applications where both detail and speed are critical.
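As a toy illustration of windowed self-attention run at several scales — the window sizes, identity Q/K/V projections, and output averaging below are all simplifying assumptions, not the paper's hierarchical transformer backbone:

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention.

    x: (tokens, dim); identity Q/K/V projections keep the sketch short."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def multi_scale_attention(x, window_sizes=(2, 4)):
    """Run windowed attention at several scales and average the outputs.

    Toy version: the token count must be divisible by each window size."""
    n, _ = x.shape
    outputs = []
    for w in window_sizes:
        out = np.concatenate([self_attention(x[i:i + w]) for i in range(0, n, w)])
        outputs.append(out)
    return np.mean(outputs, axis=0)

x = np.random.default_rng(0).normal(size=(8, 16))
y = multi_scale_attention(x)
print(y.shape)  # (8, 16)
```

Small windows capture local detail and large windows capture wider context; averaging the two is one (simplified) way to combine them.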

引用次数: 0
3D convolutional neural network based on spatial-spectral feature pictures learning for decoding motor imagery EEG signal. 基于空间频谱特征图学习的三维卷积神经网络在运动意象脑电信号解码中的应用。
IF 2.6 4区 计算机科学 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-12-10 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1485640
Xiaoguang Li, Yaqi Chu, Xuejian Wu

Non-invasive brain-computer interfaces (BCI) hold great promise in the field of neurorehabilitation. They are easy to use and do not require surgery, particularly in the area of motor imagery electroencephalography (EEG). However, motor imagery EEG signals often have a low signal-to-noise ratio and limited spatial and temporal resolution. Traditional deep neural networks typically focus only on the spatial and temporal features of EEG, resulting in relatively low decoding accuracy for motor imagery tasks. To address these challenges, this paper proposes a 3D Convolutional Neural Network (P-3DCNN) decoding method that jointly learns spatial-frequency feature maps from the frequency and spatial domains of the EEG signals. First, the Welch method is used to calculate the frequency-band power spectrum of the EEG, and a 2D matrix representing the spatial topology of the electrodes is constructed. These spatial-frequency representations are then generated through cubic interpolation of the temporal EEG data. Next, the paper designs a 3DCNN with 1D and 2D convolutional layers in series to optimize the convolutional kernel parameters and effectively learn the spatial-frequency features of the EEG. Batch normalization and dropout are also applied to improve the training speed and classification performance of the network. Finally, the proposed method is compared experimentally to a range of classic machine learning and deep learning techniques. The results show an average decoding accuracy of 86.69%, surpassing other advanced networks. 
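The Welch band-power and electrode-topology step can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the 3×3 montage, band edges, and segment parameters are hypothetical, and the cubic interpolation over time described in the abstract is omitted.

```python
import numpy as np

def welch_band_power(signal, fs, band, seg_len=128, overlap=64):
    """Average periodograms of Hann-windowed overlapping segments (Welch),
    then sum the power inside the given frequency band."""
    window = np.hanning(seg_len)
    step = seg_len - overlap
    psds = []
    for start in range(0, len(signal) - seg_len + 1, step):
        seg = signal[start:start + seg_len] * window
        psds.append(np.abs(np.fft.rfft(seg)) ** 2)
    psd = np.mean(psds, axis=0)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum()

# Hypothetical 3x3 electrode grid; real EEG montages (e.g. 10-20) differ.
layout = {"F3": (0, 0), "Fz": (0, 1), "F4": (0, 2),
          "C3": (1, 0), "Cz": (1, 1), "C4": (1, 2),
          "P3": (2, 0), "Pz": (2, 1), "P4": (2, 2)}

fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
eeg = {ch: np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)
       for ch in layout}                          # toy 10 Hz (mu-band) rhythms

topo = np.zeros((3, 3))
for ch, (r, c) in layout.items():
    topo[r, c] = welch_band_power(eeg[ch], fs, band=(8, 13))   # mu band
print(topo.shape)  # (3, 3)
```

Each band then yields one such 2D map, and stacking the maps over bands (and interpolating) gives the 3D spatial-frequency input the network consumes.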

引用次数: 0
An improved graph factorization machine based on solving unbalanced game perception. 一种基于求解不平衡博弈感知的改进图分解机。
IF 2.6 4区 计算机科学 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-12-04 eCollection Date: 2024-01-01 DOI: 10.3389/fnbot.2024.1481297
Xiaoxia Xie, Yuan Jia, Tiande Ma

The user perception of mobile games is crucial for improving user experience and thus enhancing game profitability. The sparse data captured in games can lead to unstable model performance. This paper proposes a new method, the balanced graph factorization machine (BGFM), based on existing algorithms and taking into account the data imbalance and important high-dimensional features. The data categories are first balanced by Borderline-SMOTE oversampling, and then features are represented naturally in a graph-structured way. A highlight is that the BGFM contains interaction mechanisms for aggregating beneficial features; the results are represented as edges in the graph. Next, BGFM combines factorization machine (FM) and graph neural network strategies to concatenate sequential feature interactions in the graph, with an attention mechanism that assigns inter-feature weights. Experiments were conducted on the collected game perception dataset. The performance of the proposed BGFM was compared with eight state-of-the-art models; it significantly surpassed all of them in AUC, precision, recall, and F-measure.
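A minimal sketch of the Borderline-SMOTE idea used in the first step. The neighbour thresholds and sampling rule below are simplified assumptions; in practice imbalanced-learn's BorderlineSMOTE would be the usual tool.

```python
import numpy as np

def borderline_smote(X_min, X_maj, k=5, n_new=20, rng=None):
    """Toy Borderline-SMOTE: oversample only minority points whose
    k-nearest neighbours (over both classes) are majority-dominated,
    by interpolating towards minority neighbours."""
    rng = rng or np.random.default_rng(0)
    X_all = np.vstack([X_min, X_maj])
    is_maj = np.array([False] * len(X_min) + [True] * len(X_maj))

    def knn(p, X, k):
        d = np.linalg.norm(X - p, axis=1)
        return np.argsort(d)[1:k + 1]        # skip the point itself

    # "Danger" points: more than half of the neighbours are majority,
    # but not all of them (all-majority points are treated as noise).
    danger = []
    for i, p in enumerate(X_min):
        m = is_maj[knn(p, X_all, k)].sum()
        if k / 2 <= m < k:
            danger.append(i)
    if not danger:
        return X_min

    synthetic = []
    for _ in range(n_new):
        i = rng.choice(danger)
        j = rng.choice(knn(X_min[i], X_min, min(k, len(X_min) - 1)))
        lam = rng.random()
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack([X_min, np.array(synthetic)])

rng = np.random.default_rng(2)
X_min = rng.normal(0.0, 1.0, size=(15, 2))   # minority class
X_maj = rng.normal(1.5, 1.0, size=(100, 2))  # majority class
X_bal = borderline_smote(X_min, X_maj)
print(len(X_bal))
```

Restricting synthesis to the borderline region concentrates the new samples where the classifier's decision boundary is actually contested, rather than deep inside the minority cluster.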

引用次数: 0