
Frontiers in Neurorobotics: Latest Publications

Critical review on the relationship between design variables and performance of dexterous hands: a quantitative analysis.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-30 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1513458
Lei Jiang, Chaojie Fu, Yanhong Liang, Yongbin Jin, Hongtao Wang

Dexterous hands play vital roles in tasks performed by humanoid robots. For the first time, we quantify the correlation between design variables and the performance of 65 dexterous hands using Cramér's V. Comprehensive cross-correlation analysis quantitatively reveals how performance metrics such as speed, weight, fingertip force, and compactness relate to design variables including degrees of freedom (DOF), structural form, driving form, and transmission mode. This study shows how various design parameters are inherently coupled, leading to compromises among performance metrics. These findings provide a theoretical basis for designing dexterous hands in various application scenarios and offer new insights for performance optimization.
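
For readers unfamiliar with the statistic, the following minimal Python sketch computes Cramér's V from a chi-squared test of a categorical-by-categorical contingency table; the counts and variable names are hypothetical, not data from the review.

```python
# Illustration only: Cramér's V between two categorical design variables.
# The contingency table is hypothetical; rows could be driving forms and
# columns transmission modes, with counts of hands in each combination.
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table: np.ndarray) -> float:
    """Cramér's V for an r x c contingency table of counts (0 = no association, 1 = perfect)."""
    chi2, _, _, _ = chi2_contingency(table)
    n = table.sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

toy_table = np.array([
    [12, 3, 5],   # hypothetical counts, e.g. motor-driven hands by transmission mode
    [2, 8, 1],    # pneumatic
    [4, 2, 9],    # tendon-driven
])
print(f"Cramér's V = {cramers_v(toy_table):.3f}")
```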

Citations: 0
LoCS-Net: Localizing convolutional spiking neural network for fast visual place recognition.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1490267
Ugur Akcal, Ivan Georgiev Raikov, Ekaterina Dmitrievna Gribkova, Anwesa Choudhuri, Seung Hyun Kim, Mattia Gazzola, Rhanor Gillette, Ivan Soltesz, Girish Chowdhary

Visual place recognition (VPR) is the ability to recognize locations in a physical environment based only on visual inputs. It is a challenging task due to perceptual aliasing, viewpoint and appearance variations, and the complexity of dynamic scenes. Despite promising demonstrations, many state-of-the-art (SOTA) VPR approaches based on artificial neural networks (ANNs) suffer from computational inefficiency. However, spiking neural networks (SNNs) implemented on neuromorphic hardware are reported to have remarkable potential for computationally more efficient solutions. Still, training SOTA SNNs for VPR is often intractable on large and diverse datasets, and they typically demonstrate poor real-time operation performance. To address these shortcomings, we developed an end-to-end convolutional SNN model for VPR that leverages backpropagation for tractable training. Rate-based approximations of leaky integrate-and-fire (LIF) neurons are employed during training, which are then replaced with spiking LIF neurons during inference. The proposed method significantly outperforms existing SOTA SNNs on challenging datasets like Nordland and Oxford RobotCar, achieving 78.6% precision at 100% recall on the Nordland dataset (compared to 73.0% from the current SOTA) and 45.7% on the Oxford RobotCar dataset (compared to 20.2% from the current SOTA). Our approach offers a simpler training pipeline while yielding significant improvements in both training and inference times compared to SOTA SNNs for VPR. Hardware-in-the-loop tests using Intel's neuromorphic USB form factor, Kapoho Bay, show that our on-chip spiking models for VPR trained via the ANN-to-SNN conversion strategy continue to outperform their SNN counterparts, despite a slight but noticeable decrease in performance when transitioning from off-chip to on-chip, while offering significant energy efficiency. The results highlight the outstanding rapid prototyping and real-world deployment capabilities of this approach, showing it to be a substantial step toward more prevalent SNN-based real-world robotics solutions.
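
As a toy illustration of the neuron model named above (assumed parameters; this is not the LoCS-Net code), the sketch below simulates a discrete-time leaky integrate-and-fire unit of the kind used at inference, next to a simple differentiable rate-style proxy of the kind used to keep training tractable.

```python
# Toy LIF neuron and a rate-style training surrogate (illustration, not the LoCS-Net code).
import numpy as np

def lif_spike_train(current, tau=20.0, v_th=1.0, dt=1.0):
    """Discrete-time leaky integrate-and-fire: integrate input, spike and reset at threshold."""
    v, spikes = 0.0, []
    for i in current:
        v += (dt / tau) * (-v + i)   # leaky membrane integration
        if v >= v_th:                # threshold crossing -> emit a spike, then reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return np.array(spikes)

def rate_proxy(current, v_th=1.0):
    """Differentiable stand-in for the firing rate, usable with backpropagation."""
    return np.clip(np.asarray(current) / v_th, 0.0, 1.0)

drive = np.full(200, 1.5)            # constant supra-threshold input current (arbitrary units)
print("spikes emitted:", int(lif_spike_train(drive).sum()))
print("mean rate proxy:", float(rate_proxy(drive).mean()))
```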

Citations: 0
Privacy-preserving ADP for secure tracking control of AVRs against unreliable communication.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-29 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1549414
Kun Zhang, Kezhen Han, Zhijian Hu, Guoqiang Tan

In this study, we developed an encrypted guaranteed-cost tracking control scheme for autonomous vehicles or robots (AVRs) using the adaptive dynamic programming (ADP) technique. To construct the tracking dynamics under unreliable communication, the AVR's motion is analyzed. To mitigate information leakage and unauthorized access in vehicular network systems, an encrypted guaranteed-cost policy iteration algorithm is developed, incorporating encryption and decryption schemes between the vehicle and the cloud based on the tracking dynamics. Building on a simplified single-network framework, the Hamilton-Jacobi-Bellman equation is approximately solved, avoiding the complexity of dual-network structures and reducing the computational costs. The input-constrained issue is successfully handled using a non-quadratic value function. Furthermore, the approximate optimal control is verified to stabilize the tracking system. A case study involving an AVR system validates the effectiveness and practicality of the proposed algorithm.
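
For context, a generic constrained-input Hamilton-Jacobi-Bellman condition of the kind such ADP schemes solve approximately is sketched below; the notation is assumed here, and the paper's exact cost, tracking formulation, and encryption layer are not reproduced.

```latex
% Generic constrained-input HJB condition (illustrative notation only).
% Tracking-error dynamics: \dot{e} = f(e) + g(e)u, actuator bound |u_i| <= \lambda.
\[
0 = \min_{u}\Big[\, Q(e) + W(u) + \nabla V^{*}(e)^{\top}\big(f(e) + g(e)\,u\big) \Big],
\qquad
W(u) = 2\int_{0}^{u} \lambda \tanh^{-1}\!\big(v/\lambda\big)^{\top} R \,\mathrm{d}v ,
\]
\[
u^{*}(e) = -\lambda \tanh\!\Big(\tfrac{1}{2\lambda}\, R^{-1} g(e)^{\top} \nabla V^{*}(e)\Big).
\]
```

The non-quadratic penalty W(u) keeps the minimizing control within the actuator bound, and policy iteration alternates between evaluating V for the current policy and improving the policy via the closed-form expression for u*.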

Citations: 0
NavBLIP: a visual-language model for enhancing unmanned aerial vehicles navigation and object detection.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-24 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1513354
Ye Li, Li Yang, Meifang Yang, Fei Yan, Tonghua Liu, Chensi Guo, Rufeng Chen

Introduction: In recent years, Unmanned Aerial Vehicles (UAVs) have increasingly been deployed in various applications such as autonomous navigation, surveillance, and object detection. Traditional methods for UAV navigation and object detection have often relied on either handcrafted features or unimodal deep learning approaches. While these methods have seen some success, they frequently encounter limitations in dynamic environments, where robustness and computational efficiency become critical for real-time performance. Additionally, these methods often fail to effectively integrate multimodal inputs, which restricts their adaptability and generalization capabilities when facing complex and diverse scenarios.

Methods: To address these challenges, we introduce NavBLIP, a novel visual-language model specifically designed to enhance UAV navigation and object detection by utilizing multimodal data. NavBLIP incorporates transfer learning techniques along with a Nuisance-Invariant Multimodal Feature Extraction (NIMFE) module. The NIMFE module plays a key role in disentangling relevant features from intricate visual and environmental inputs, allowing UAVs to swiftly adapt to new environments and improve object detection accuracy. Furthermore, NavBLIP employs a multimodal control strategy that dynamically selects context-specific features to optimize real-time performance, ensuring efficiency in high-stakes operations.

Results and discussion: Extensive experiments on benchmark datasets such as RefCOCO, CC12M, and OpenImages reveal that NavBLIP outperforms existing state-of-the-art models in terms of accuracy, recall, and computational efficiency. Additionally, our ablation study emphasizes the significance of the NIMFE and transfer learning components in boosting the model's performance, underscoring NavBLIP's potential for real-time UAV applications where adaptability and computational efficiency are paramount.
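
As a toy illustration of the "dynamically selects context-specific features" idea described in the Methods above (a generic gating sketch; the NIMFE module and NavBLIP's actual architecture are not reproduced, and all names and dimensions are assumptions):

```python
# Toy context-gated feature selection (illustration only, not the NIMFE module or NavBLIP code).
import torch
import torch.nn as nn

class ContextGatedFusion(nn.Module):
    """Weight visual features with a sigmoid gate computed from a context vector
    (e.g., a hypothetical flight-state or task embedding), so that context-relevant
    features dominate downstream navigation and detection heads."""
    def __init__(self, vis_dim: int = 256, ctx_dim: int = 32):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(ctx_dim, vis_dim), nn.Sigmoid())

    def forward(self, vis_feat: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        return vis_feat * self.gate(ctx)   # element-wise selection of features

fusion = ContextGatedFusion()
vis = torch.randn(8, 256)       # batch of visual features (assumed size)
ctx = torch.randn(8, 32)        # batch of context embeddings (assumed size)
print(fusion(vis, ctx).shape)   # torch.Size([8, 256])
```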

Citations: 0
Brain-inspired multimodal motion and fine-grained action recognition.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-24 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1502071
Yuening Li, Xiuhua Yang, Changkui Chen

Introduction: Traditional action recognition methods predominantly rely on a single modality, such as vision or motion, which presents significant limitations when dealing with fine-grained action recognition. These methods struggle particularly with video data containing complex combinations of actions and subtle motion variations.

Methods: Typically, they depend on handcrafted feature extractors or simple convolutional neural network (CNN) architectures, which makes effective multimodal fusion challenging. This study introduces a novel architecture called FGM-CLIP (Fine-Grained Motion CLIP) to enhance fine-grained action recognition. FGM-CLIP leverages the powerful capabilities of Contrastive Language-Image Pretraining (CLIP), integrating a fine-grained motion encoder and a multimodal fusion layer to achieve precise end-to-end action recognition. By jointly optimizing visual and motion features, the model captures subtle action variations, resulting in higher classification accuracy in complex video data.

Results and discussion: Experimental results demonstrate that FGM-CLIP significantly outperforms existing methods on multiple fine-grained action recognition datasets. Its multimodal fusion strategy notably improves the model's robustness and accuracy, particularly for videos with intricate action patterns.
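
A minimal sketch of the CLIP-style contrastive objective that underlies such joint optimization of visual and motion embeddings is given below; the symmetric InfoNCE form, temperature, and dimensions are assumptions rather than the FGM-CLIP implementation.

```python
# Generic CLIP-style symmetric contrastive loss between two modalities (illustration only).
import torch
import torch.nn.functional as F

def clip_style_loss(visual_emb, motion_emb, temperature=0.07):
    """Pull matching visual/motion pairs together and push non-matching pairs apart."""
    v = F.normalize(visual_emb, dim=-1)
    m = F.normalize(motion_emb, dim=-1)
    logits = v @ m.t() / temperature                 # cosine-similarity logits, shape [B, B]
    targets = torch.arange(v.size(0), device=v.device)
    loss_v = F.cross_entropy(logits, targets)        # visual -> motion direction
    loss_m = F.cross_entropy(logits.t(), targets)    # motion -> visual direction
    return 0.5 * (loss_v + loss_m)

vis = torch.randn(16, 512)   # batch of clip-level visual embeddings (assumed size)
mot = torch.randn(16, 512)   # batch of motion embeddings (assumed size)
print(float(clip_style_loss(vis, mot)))
```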

Citations: 0
Transformer-based short-term traffic forecasting model considering traffic spatiotemporal correlation.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-23 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1527908
Ande Chang, Yuting Ji, Yiming Bie

Traffic forecasting is crucial for a variety of applications, including route optimization, signal management, and travel time estimation. However, many existing prediction models struggle to accurately capture the spatiotemporal patterns in traffic data due to its inherent nonlinearity, high dimensionality, and complex dependencies. To address these challenges, a short-term traffic forecasting model, Trafficformer, is proposed based on the Transformer framework. The model first uses a multilayer perceptron to extract features from historical traffic data, then enhances spatial interactions through Transformer-based encoding. By incorporating road network topology, a spatial mask filters out noise and irrelevant interactions, improving prediction accuracy. Finally, traffic speed is predicted using another multilayer perceptron. In the experiments, Trafficformer is evaluated on the Seattle Loop Detector dataset and compared with six baseline methods, using Mean Absolute Error, Mean Absolute Percentage Error, and Root Mean Square Error as metrics. The results show that Trafficformer not only achieves higher prediction accuracy but also effectively identifies key road sections, and it shows great potential for intelligent traffic control optimization and refined allocation of traffic resources.
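
The masking idea described above can be illustrated with a small sketch that builds an additive attention mask from a hypothetical road-adjacency matrix, so that non-adjacent segments are excluded from the Transformer's attention (this is not the Trafficformer code).

```python
# Illustration: build an additive attention mask from road-network topology so that
# attention only flows between a road segment, its neighbors, and itself.
import torch
import torch.nn as nn

adjacency = torch.tensor([           # hypothetical 4-segment road network
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=torch.bool)
allowed = adjacency | torch.eye(4, dtype=torch.bool)
spatial_mask = torch.zeros(4, 4)
spatial_mask[~allowed] = float("-inf")   # blocked pairs contribute nothing to attention

encoder_layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

features = torch.randn(8, 4, 32)         # [batch, segments, features] from an MLP extractor
encoded = encoder(features, mask=spatial_mask)
print(encoded.shape)                     # torch.Size([8, 4, 32])
```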

Citations: 0
AMEEGNet: attention-based multiscale EEGNet for effective motor imagery EEG decoding.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-22 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1540033
Xuejian Wu, Yaqi Chu, Qing Li, Yang Luo, Yiwen Zhao, Xingang Zhao

Recently, motor imagery (MI) electroencephalogram (EEG) signals have gained significant traction in brain-computer interface (BCI) technology, particularly for the rehabilitation of paralyzed patients. However, the low signal-to-noise ratio of MI EEG makes effective decoding difficult and hinders the development of BCIs. In this paper, an attention-based multiscale EEGNet (AMEEGNet) is proposed to improve the decoding performance of MI-EEG. First, three parallel EEGNets with a fusion transmission method are employed to extract high-quality temporal-spatial features of EEG data at multiple scales. Then, an efficient channel attention (ECA) module enhances the acquisition of more discriminative spatial features through a lightweight approach that weights critical channels. Experimental results demonstrate that the proposed model achieves decoding accuracies of 81.17%, 89.83%, and 95.49% on the BCI-2a, 2b, and HGD datasets, respectively. The results show that the proposed AMEEGNet effectively decodes temporal-spatial features, providing a novel perspective on MI-EEG decoding and advancing future BCI applications.
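
For reference, a compact sketch of the efficient channel attention idea, in the spirit of ECA-Net, is shown below; kernel size and tensor shapes are assumptions, and this is not the AMEEGNet implementation.

```python
# Generic ECA-style channel attention block (illustration, not the AMEEGNet implementation).
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Squeeze each channel to a scalar, run a cheap 1-D conv across channels,
    and rescale the feature map by the resulting per-channel weights."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:                # x: [B, C, H, W]
        y = self.pool(x).squeeze(-1).transpose(-1, -2)                  # -> [B, 1, C]
        y = self.sigmoid(self.conv(y)).transpose(-1, -2).unsqueeze(-1)  # -> [B, C, 1, 1]
        return x * y                                                    # channel-weighted features

feat = torch.randn(2, 16, 22, 250)   # e.g. [batch, feature maps, EEG electrodes, time samples]
print(ECA()(feat).shape)             # torch.Size([2, 16, 22, 250])
```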

Citations: 0
Graph Convolutional Networks for multi-modal robotic martial arts leg pose recognition.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-20 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1520983
Shun Yao, Yihan Ping, Xiaoyu Yue, He Chen

Introduction: Accurate recognition of martial arts leg poses is essential for applications in sports analytics, rehabilitation, and human-computer interaction. Traditional pose recognition models, relying on sequential or convolutional approaches, often struggle to capture the complex spatial-temporal dependencies inherent in martial arts movements. These methods lack the ability to effectively model the nuanced dynamics of joint interactions and temporal progression, leading to limited generalization in recognizing complex actions.

Methods: To address these challenges, we propose PoseGCN, a Graph Convolutional Network (GCN)-based model that integrates spatial, temporal, and contextual features through a novel framework. PoseGCN leverages spatial-temporal graph encoding to capture joint motion dynamics, an action-specific attention mechanism to assign importance to relevant joints depending on the action context, and a self-supervised pretext task to enhance temporal robustness and continuity. Experimental results on four benchmark datasets (Kinetics-700, Human3.6M, NTU RGB+D, and UTD-MHAD) demonstrate that PoseGCN outperforms existing models, achieving state-of-the-art accuracy and F1 scores.

Results and discussion: These findings highlight the model's capacity to generalize across diverse datasets and capture fine-grained pose details, showcasing its potential in advancing complex pose recognition tasks. The proposed framework offers a robust solution for precise action recognition and paves the way for future developments in multi-modal pose analysis.
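
A bare-bones sketch of the graph convolution at the heart of such skeleton-based models is shown below, using a symmetrically normalized joint adjacency; names, sizes, and the toy skeleton are assumptions rather than the PoseGCN implementation.

```python
# Minimal graph convolution over a skeleton graph (illustration, not the PoseGCN implementation).
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One GCN layer: X' = ReLU(D^{-1/2} (A + I) D^{-1/2} X W), with A the joint adjacency."""
    def __init__(self, in_dim: int, out_dim: int, adjacency: torch.Tensor):
        super().__init__()
        a_hat = adjacency + torch.eye(adjacency.size(0))   # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        self.register_buffer("a_norm", d_inv_sqrt @ a_hat @ d_inv_sqrt)
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: [batch, joints, features]
        return torch.relu(self.a_norm @ self.linear(x))

adjacency = torch.tensor([[0., 1., 0.],                    # toy 3-joint chain: hip-knee-ankle
                          [1., 0., 1.],
                          [0., 1., 0.]])
layer = GraphConv(in_dim=3, out_dim=16, adjacency=adjacency)
poses = torch.randn(4, 3, 3)                               # [batch, joints, (x, y, z)]
print(layer(poses).shape)                                  # torch.Size([4, 3, 16])
```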

Citations: 0
Improved object detection method for autonomous driving based on DETR.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-20 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1484276
Huaqi Zhao, Songnan Zhang, Xiang Peng, Zhengguang Lu, Guojing Li

Object detection is a critical component in the development of autonomous driving technology and has demonstrated significant growth potential. To address the limitations of current techniques, this paper presents an improved object detection method for autonomous driving based on a detection transformer (DETR). First, we introduce a multi-scale feature and location information extraction method, which addresses the model's inadequacy in multi-scale object localization and detection. In addition, we developed a transformer encoder based on the group axial attention mechanism. This allows for efficient attention range control in the horizontal and vertical directions while reducing computation, ultimately enhancing the inference speed. Furthermore, we propose a novel dynamic hyperparameter tuning training method based on Pareto efficiency, which coordinates the training state of the loss functions through dynamic weights, overcoming issues associated with manually setting fixed weights and enhancing model convergence speed and accuracy. Experimental results demonstrate that the proposed method surpasses existing methods, with improvements of 3.3%, 4.5%, and 3% in average precision on the COCO, PASCAL VOC, and KITTI datasets, respectively, and an 84% increase in FPS.
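
A compact sketch of plain axial attention, the idea that the group axial attention mechanism above builds on, is shown below; the grouped variant and the rest of the detector are not reproduced, and the dimensions are assumptions.

```python
# Generic axial attention over a feature map: attend along rows, then along columns.
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Two 1-D multi-head attention passes (width axis, then height axis) instead of full
    2-D attention, shrinking the attention cost from O((HW)^2) to O(HW(H+W))."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: [B, H, W, C]
        b, h, w, c = x.shape
        rows = x.reshape(b * h, w, c)                         # attend across W within each row
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)     # attend across H within each column
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 2, 1, 3)

feat = torch.randn(2, 16, 16, 64)     # [batch, height, width, channels]
print(AxialAttention()(feat).shape)   # torch.Size([2, 16, 16, 64])
```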

Citations: 0
Cross-modality fusion with EEG and text for enhanced emotion detection in English writing.
IF 2.6 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-01-17 | eCollection Date: 2024-01-01 | DOI: 10.3389/fnbot.2024.1529880
Jing Wang, Ci Zhang

Introduction: Emotion detection in written text is critical for applications in human-computer interaction, affective computing, and personalized content recommendation. Traditional approaches to emotion detection primarily leverage textual features, using natural language processing techniques such as sentiment analysis, which, while effective, may miss subtle nuances of emotions. These methods often fall short in recognizing the complex, multimodal nature of human emotions, as they ignore physiological cues that could provide richer emotional insights.

Methods: To address these limitations, this paper proposes Emotion Fusion-Transformer, a cross-modality fusion model that integrates EEG signals and textual data to enhance emotion detection in English writing. By utilizing the Transformer architecture, our model effectively captures contextual relationships within the text while concurrently processing EEG signals to extract underlying emotional states. Specifically, the Emotion Fusion-Transformer first preprocesses EEG data through signal transformation and filtering, followed by feature extraction that complements the textual embeddings. These modalities are fused within a unified Transformer framework, allowing for a holistic view of both the cognitive and physiological dimensions of emotion.

Results and discussion: Experimental results demonstrate that the proposed model significantly outperforms text-only and EEG-only approaches, with improvements in both accuracy and F1-score across diverse emotional categories. This model shows promise for enhancing affective computing applications by bridging the gap between physiological and textual emotion detection, enabling more nuanced and accurate emotion analysis in English writing.
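
As a small illustration of the "signal transformation and filtering" step mentioned in the Methods above, the sketch below applies a standard Butterworth band-pass to synthetic EEG; the cut-offs and sampling rate are assumptions, not the paper's preprocessing choices.

```python
# Illustrative EEG band-pass filtering step (assumed cut-offs and sampling rate,
# not the paper's actual preprocessing pipeline).
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg: np.ndarray, low_hz: float = 1.0, high_hz: float = 45.0, fs: float = 250.0):
    """Zero-phase Butterworth band-pass over the last axis (time) of an EEG array."""
    b, a = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=-1)

rng = np.random.default_rng(0)
raw = rng.standard_normal((32, 2500))   # 32 channels x 10 s at 250 Hz of synthetic data
filtered = bandpass(raw)
print(filtered.shape)                   # (32, 2500)
```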

Citations: 0