
Latest publications in Frontiers in Neurorobotics

SpikeAEC: a neuromodulation-based spiking controller for explore-exploit balancing in mobile robots.
IF 2.8 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-05 eCollection Date: 2026-01-01 DOI: 10.3389/fnbot.2026.1757795
Canyang Liu, Yichen Liu, Yongqi Zhou, Buqin Su

Balancing exploration and exploitation remains a fundamental challenge in reliable mobile robot control, as conventional policies often converge on suboptimal behaviors. Inspired by the brain's division of labor for adaptive control, we propose SpikeAEC, a fully spiking, neuromodulated Actor-Explorer-Critic architecture designed to address this dilemma online within a closed-loop system. SpikeAEC comprises three specialized subnetworks operating in parallel: the Actor, inspired by the basal ganglia, proposes exploitative actions; the Explorer, modeled after the ACC-GPe-STN pathway, generates adaptive exploratory actions gated by a vigilance signal modulated by the accumulated global temporal-difference (TD) error; and the Critic, based on the ventral striatum, computes the TD error. The final action is selected by a separate, TAN-based Arbitrator, which probabilistically chooses between the Actor's and Explorer's action proposals according to recent performance and the TD error. These subnetworks are coupled through a unified three-factor learning framework that uses the TD signal and phasic neuromodulators (acetylcholine and dopamine) from the Arbitrator to drive pathway-specific synaptic plasticity. This online plasticity enhances the quality of action proposals and accelerates policy refinement. In simulation, SpikeAEC outperforms leading brain-inspired methods by converging 24% faster, reducing trajectory length by 18%, and increasing cumulative reward by over 5% against the top-performing baseline, all while maintaining consistency with established neurophysiological principles.
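
The arbitration idea described above — an Arbitrator that probabilistically picks the Explorer's proposal when the accumulated TD error is large — can be illustrated with a minimal sketch. The one-step TD error follows the standard definition; the sigmoid gating, the gain `beta`, and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def td_error(reward, value_s, value_s_next, gamma=0.99):
    """One-step temporal-difference error: delta = r + gamma * V(s') - V(s)."""
    return reward + gamma * value_s_next - value_s

def arbitrate(actor_action, explorer_action, accumulated_td, beta=2.0, rng=None):
    """Choose the Explorer's proposal with a probability that grows with the
    magnitude of the accumulated TD error (a stand-in for the vigilance
    signal); otherwise exploit via the Actor's proposal."""
    rng = rng or np.random.default_rng(0)
    p_explore = 1.0 / (1.0 + np.exp(-beta * (abs(accumulated_td) - 1.0)))
    return explorer_action if rng.random() < p_explore else actor_action
```

With a small accumulated error the sketch exploits; a persistently large error tips the choice toward exploration, mirroring the vigilance-gated behavior the abstract describes.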

Citations: 0
Neurorobotics for automotive manufacturing industry in era of embodied intelligence: a mini review.
IF 2.8 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-03-03 eCollection Date: 2026-01-01 DOI: 10.3389/fnbot.2026.1796043
Bangcheng Zhang, Qi Xia

As automotive manufacturing advances into the Industry 5.0 era, traditional rigid automation production models are transitioning toward the embodied intelligence paradigm. Confronted with mass customization, diverse products, and small-batch production, the automotive manufacturing environment is highly dynamic and unstructured. Unlike traditional industrial intelligence built on static, hard-coded logic, robots inspired by bionic neural mechanisms enhance their cognitive abilities through closed-loop interaction with dynamic environments; this shift enables them to perform flexible and reliable operations in complex production scenarios. This paper analyzes the core role and key technologies of neural intelligence algorithms in reshaping the perception, decision-making, and execution of industrial robots, provides a systematic review of industrial robot evolution within the automotive industry, and outlines a reliable path for future development.

Citations: 0
AMSA-Net: attention-based multi-scale feature aggregation network for single image dehazing.
IF 2.8 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-02-17 eCollection Date: 2026-01-01 DOI: 10.3389/fnbot.2026.1698100
Shanqin Wang, Mengjun Miao, Miao Zhang

Problem: Deep learning technology promotes the development of single-image dehazing. However, many existing methods fail to fully consider the haze density and its spatial distribution, which limits the improvement of dehazing performance.

Proposed solution: To address this issue, we propose an attention-based multi-scale feature aggregation network (AMSA-Net) for single-image dehazing.

Method: AMSA-Net is an encoder-decoder structure whose encoder and decoder are built from multi-scale hybrid attention feature aggregation modules (MSHA-FAM). The module perceives the haze density and spatial information in the hazy image, which helps improve the dehazing effect. MSHA-FAM comprises two key components: a scale-aware coordinate residual module (SCRM) and a multi-scale feature refinement residual module (MSFRRM). SCRM uses improved coordinate attention to effectively capture haze density and spatial characteristics, significantly improving the dehazing effect. MSFRRM extracts semantic features through up-sampling and down-sampling and uses an improved pixel attention mechanism to enhance key features. In the overall MSHA-FAM pipeline, SCRM first learns the density and spatial distribution characteristics of the haze, which MSFRRM then refines, so as to remove haze more effectively.
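
The abstract does not specify the "improved pixel attention" in MSFRRM, so as orientation only, here is the generic form of pixel attention: a per-location gate computed from the feature map (here a 1x1 convolution reduced to a channel-weighted sum) that rescales every pixel. The shapes and gating function are assumptions, not the paper's design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pixel_attention(feat, w):
    """Generic pixel attention: a 1x1 conv (per-channel weighted sum) yields
    one attention value per spatial location, which gates the input.
    feat: (C, H, W) feature map; w: (C,) weights of the 1x1 conv."""
    attn = sigmoid(np.tensordot(w, feat, axes=([0], [0])))  # shape (H, W)
    return feat * attn[None, :, :]  # broadcast the gate over channels
```

The output keeps the input shape; regions the gate scores near 1 pass through, while low-scoring pixels are suppressed — the mechanism by which attention can emphasize dense-haze regions.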

Key results: The experimental results demonstrate that our proposed AMSA-Net is superior to the comparison methods in terms of dehazing quality. Ablation studies further verify the effectiveness of the proposed modules.

Impact: In this work, we present AMSA-Net, which has achieved good dehazing performance and can provide high-quality input for subsequent computer vision tasks.

Citations: 0
Emotion estimation from video footage with LSTM.
IF 2.8 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-02-06 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1678984
Samer Attrah

Emotion estimation is a field that has been studied for a long time, and several approaches using machine learning models exist. This article presents BlendFER-Lite, an LSTM model that uses Blendshapes from the MediaPipe library to analyze facial expressions detected from a live-streamed camera feed. This model is trained on the FER2013 dataset and achieves 71% accuracy and an F1-score of 62%, meeting the accuracy benchmark for the FER2013 dataset while significantly reducing computational costs compared to current methods. For the sake of reproducibility, the code repository, datasets, and models proposed in this paper, in addition to the preprint, can be found on Hugging Face at: https://huggingface.co/papers/2501.13432.

JEL classification: D8, H51.

MSC classification: 35A01, 65L10, 65L12, 65L20, 65L70.

Citations: 0
Multimodal sequence dynamics and convergence optimization in dual-stream LSTM networks for complex physiological state estimation.
IF 2.8 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-02-06 eCollection Date: 2026-01-01 DOI: 10.3389/fnbot.2026.1760494
Xiaoxiao Cao

Introduction: The integration of virtual simulation with intelligent modeling is crucial for advancing the scientization and personalization of volleyball physical training. This study aims to overcome the convergence instability and feature misalignment in modeling multimodal kinematic and physiological sequences.

Methods: A dynamical framework based on a Dual-Stream Long Short-Term Memory network integrated with a temporal attention mechanism is proposed. The framework decouples heterogeneous feature learning and optimizes temporal weight distribution.

Results: Experimental validation on complex motion state estimation demonstrates that the proposed model reduces load modeling error to 3.8% and achieves a motion classification accuracy of 93.1%. The velocity trajectory fitting coefficient of determination is 0.91 with a peak deviation of 0.05 m/s.
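
The fit statistics reported above follow standard definitions; for reference, the coefficient of determination and the peak deviation of a fitted velocity trajectory can be computed as below (a generic sketch, not the paper's code).

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

def peak_deviation(y_true, y_pred):
    """Largest absolute error along the trajectory (0.05 m/s in the results)."""
    return float(np.max(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
```

R^2 = 0.91 thus means the model explains 91% of the variance of the measured velocity trajectory, while no single sample deviates by more than 0.05 m/s.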

Discussion: These results confirm the effectiveness of the attention-based DS-LSTM in optimizing multimodal sequence modeling for training state estimation and feedback.

Citations: 0
Transformer-based human-motion forecasting coupled with safe reinforcement learning for telepresence robot co-navigation.
IF 2.8 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-02-02 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1697518
Heba G Mohamed, Muhammad Nasir Khan, Fawad Naseer, Muhammad Tahir, Mohsin Jamil

Introduction: Telepresence robots (TPRs) must co-navigate with humans in constrained hospital environments, where safety depends on anticipating rather than merely reacting to human motion. Existing approaches rarely integrate short-horizon human-motion forecasting with safety-constrained control, which reduces robustness in dense corridors and ward bays. This study addresses this gap by evaluating an anticipatory, safety-aware co-navigation framework for TPRs.

Methods: We developed a modular framework that couples a lightweight transformer-based forecaster that predicts multi-agent trajectories under occlusion with a safe reinforcement learning (RL) controller. The forecaster produces short-term distributions over pedestrian states that are embedded into the RL policy state and cost as risk-aware occupancy features. Safety is enforced via constrained policy optimization augmented by a run-time control barrier function (CBF) shield that filters unsafe actions. We benchmarked the approach against a social-force or dynamic window approach (DWA), an attention-based crowd-RL policy, and model predictive control (MPC) with CBF. Experiments were conducted across two hospital-like benchmarks (a crowded corridor and a four-bed ward), totaling 2,400 episodes. Outcomes included task success, collision count, minimum human-robot clearance, near-miss events ( ≤ 0.3 m), time-to-goal, CBF violations, and ablations removing forecasting and the CBF shield.
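
The run-time CBF shield described above filters actions that would violate a safety constraint. A minimal discrete-time sketch for a distance-keeping barrier h(d) = d - d_min, with a 1-D closing-distance model and all parameter values assumed (not the paper's dynamics or tuning):

```python
import numpy as np

def cbf_shield(d, v_candidates, v_nominal_idx, dt=0.1, d_min=0.3, alpha=0.5):
    """Discrete-time CBF filter: with h(d) = d - d_min, an action v is safe
    if h(d') >= (1 - alpha) * h(d) for the next state d' = d - v * dt
    (robot closing on the human at speed v). Returns the nominal action if
    safe, else the closest safe candidate, else stop."""
    def safe(v):
        h, h_next = d - d_min, (d - v * dt) - d_min
        return h_next >= (1.0 - alpha) * h
    v_nom = v_candidates[v_nominal_idx]
    if safe(v_nom):
        return v_nom
    safe_vs = [v for v in v_candidates if safe(v)]
    if not safe_vs:
        return 0.0  # fall back to stopping
    return min(safe_vs, key=lambda v: abs(v - v_nom))
```

This mirrors the paper's "shield" role: the RL policy proposes freely, and the filter only intervenes when the proposal would let the clearance decay faster than the barrier condition allows, which is how zero CBF violations can be guaranteed at run time.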

Results: Relative to the best-performing baseline, the proposed method improved task success by 21.6% and reduced collisions by 47.3%. Median minimum human-robot clearance increased by 0.19 m, and near-miss events decreased by 38.5%. Time-to-goal was maintained within +2.7% of MPC+CBF while incurring zero CBF violations under the shield. Ablation studies showed that removing forecasting degraded success by 14.2%, whereas removing the CBF shield increased constraint breaches from 0% to 6.1% of steps.

Discussion: Anticipatory perception combined with Safe-RL yields substantially safer and more reliable telepresence co-navigation in human-dense clinical layouts without sacrificing efficiency. The framework is modular, enabling alternative forecasters and safety shields. Limitations include sensitivity to forecast drift during abrupt changes in crowd flow. Future work will explore on-device adaptation, shared-autonomy overlays to incorporate operator intent, and prospective evaluations in live hospital workflows.

Citations: 0
Motion feature extraction based on semi-supervised learning and long short-term memory network in digital dance.
IF 2.8 CAS Tier 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2026-01-29 eCollection Date: 2026-01-01 DOI: 10.3389/fnbot.2026.1743288
Xue Yang, Hanmin Sun, Yin Lyu, Yang Sun

Digital-image technology has broadened the creative space of dance, yet accurately capturing the semantic correspondence between low-level motion data and high-level dance key-points remains challenging, especially when labeled data are scarce. We aim to establish a lightweight, semi-supervised pipeline that can extract discriminative motion features from depth sequences and map them to 3-D key-points of dancers in real time. To achieve pixel-level alignment between dance movement targets and high-dimensional sensory data, we propose a novel LSTM-CNN (Long Short Term Memory-Convolutional Neural Network) framework. Temporal-context features are first extracted by LSTM, after which multi-dimensional spatial features are captured by three convolutional layers and one max-pooling layer; the fused representation is finally regressed to 3-D body key-points. To relieve class imbalance caused by complex postures, an online hard-example mining (OHEM) strategy together with a Dice-cross-entropy weighted loss (3:1) is embedded into semi-supervised learning, enabling the network to converge with only 20% labeled samples. Experiments on the public MSR-Action3D dataset (567 sequences, 20 actions) yielded an average recognition rate of 96.9%, surpassing the best comparison method (MSST) by 1.1%. On our self-established dataset (99 sequences, 11 actions) the accuracy reached 97.99% while training time was reduced by 35% compared with the previous best Multi_perspective_MHPCs approach. Both datasets show low RMSE (≤ 0.032) between predicted and ground-truth key-points, confirming spatial precision. The results demonstrate that the proposed model can reliably track subtle dance gestures under limited annotation, offering an efficient, low-cost solution for digital choreography, motion-style transfer and interactive stage performance.
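
The 3:1 Dice-cross-entropy weighted loss mentioned above is a common combination; a sketch in its usual binary form follows. The exact formulation in the paper may differ — the smoothing term, clipping, and weight placement here are assumptions.

```python
import numpy as np

def dice_ce_loss(pred, target, w_dice=3.0, w_ce=1.0, eps=1e-7):
    """Weighted Dice + binary cross-entropy loss (3:1 weighting as described).
    pred: predicted probabilities in (0, 1); target: binary labels."""
    pred = np.clip(np.asarray(pred, float), eps, 1.0 - eps)
    target = np.asarray(target, float)
    # Dice term: 1 - 2|P∩T| / (|P| + |T|), with eps smoothing
    dice = 1.0 - (2.0 * np.sum(pred * target) + eps) / (
        np.sum(pred) + np.sum(target) + eps)
    # Cross-entropy term, averaged over samples
    ce = -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return w_dice * dice + w_ce * ce
```

Because the Dice term depends on the overlap ratio rather than per-pixel counts, up-weighting it (3:1) counteracts the class imbalance the abstract attributes to complex postures.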

Citations: 0
Innovative approach of nonlinear controllers design for prosthetic knee performance. 仿生膝关节非线性控制器设计的创新方法。
IF 2.8 4区 计算机科学 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-01-21 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1681298
Atif Rehman, Rimsha Ghias, Hammad Iqbal Sherazi, Nadia Sultan

Prosthetic knee joints are essential assistive technologies designed to replicate natural gait and improve mobility for individuals with lower-limb loss. This study presents a comprehensive nonlinear dynamic model of a two-degree-of-freedom prosthetic knee joint and introduces three robust nonlinear control strategies: Integral Sliding Mode Control, Conditional Super-Twisting Sliding Mode Control, and Conditional Adaptive Positive Semidefinite Barrier Function-based Sliding Mode Control. These controllers are designed to address the challenges associated with nonlinear joint dynamics, external disturbances, and modeling uncertainties during locomotion. To optimize control performance, the gain parameters of each controller were fine-tuned using Red Fox Optimization, a metaheuristic algorithm inspired by the intelligent hunting behavior of red foxes. Stability analysis is conducted using Lyapunov theory, and control effectiveness is evaluated through simulations in MATLAB/Simulink and validated via hardware-in-the-loop testing using a C2000 Delfino F28379D microcontroller. Among the three controllers, the CoBA-based approach demonstrated the highest tracking accuracy, fastest convergence, and smoothest torque profile. The close agreement between simulation and experimental results confirms the practical applicability of the proposed control framework, offering a promising solution for intelligent and adaptive prosthetic knee systems.
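As a rough illustration of the control family this abstract names, the sketch below implements a basic integral sliding-mode law for a single joint with unit inertia under a constant disturbance. The gains, the tanh boundary-layer smoothing, and the toy simulation are illustrative assumptions only; the paper tunes its gains with Red Fox Optimization and validates the controllers on hardware-in-the-loop.

```python
import math

def ismc_torque(e, e_dot, e_int, lam=8.0, k_i=2.0, eta=5.0, phi=0.05):
    """Integral sliding surface s = e_dot + lam*e + k_i*int(e);
    cancel the known linear terms, then drive s toward zero with a
    tanh-smoothed switching term to limit chattering."""
    s = e_dot + lam * e + k_i * e_int
    return -(lam * e_dot + k_i * e) - eta * math.tanh(s / phi)

def simulate(theta0=0.5, dt=0.001, steps=20000):
    """Regulate a unit-inertia joint angle to zero under a constant
    unmodeled disturbance d = 1; returns the final tracking error."""
    theta, omega, e_int = theta0, 0.0, 0.0
    for _ in range(steps):
        e, e_dot = theta, omega          # reference trajectory is 0
        e_int += e * dt
        tau = ismc_torque(e, e_dot, e_int)
        omega += (tau + 1.0) * dt        # theta'' = tau + d, with d = 1
        theta += omega * dt
    return abs(theta)
```

Despite the disturbance, the switching term keeps the state near the sliding surface while the integral action removes the steady-state offset, which is the basic robustness property the paper's more sophisticated super-twisting and barrier-function variants build on.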

{"title":"Innovative approach of nonlinear controllers design for prosthetic knee performance.","authors":"Atif Rehman, Rimsha Ghias, Hammad Iqbal Sherazi, Nadia Sultan","doi":"10.3389/fnbot.2025.1681298","DOIUrl":"10.3389/fnbot.2025.1681298","url":null,"abstract":"<p><p>Prosthetic knee joints are essential assistive technologies designed to replicate natural gait and improve mobility for individuals with lower-limb loss. This study presents a comprehensive nonlinear dynamic model of a two-degree-of-freedom prosthetic knee joint and introduces three robust nonlinear control strategies: Integral Sliding Mode Control, Conditional Super-Twisting Sliding Mode Control, and Conditional Adaptive Positive Semidefinite Barrier Function-based Sliding Mode Control. These controllers are designed to address the challenges associated with nonlinear joint dynamics, external disturbances, and modeling uncertainties during locomotion. To optimize control performance, the gain parameters of each controller were fine-tuned using Red Fox Optimization, a metaheuristic algorithm inspired by the intelligent hunting behavior of red foxes. Stability analysis is conducted using Lyapunov theory, and control effectiveness is evaluated through simulations in MATLAB/Simulink and validated via hardware-in-the-loop testing using a C2000 Delfino F28379D microcontroller. Among the three controllers, the CoBA-based approach demonstrated the highest tracking accuracy, fastest convergence, and smoothest torque profile. The close agreement between simulation and experimental results confirms the practical applicability of the proposed control framework, offering a promising solution for intelligent and adaptive prosthetic knee systems.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1681298"},"PeriodicalIF":2.8,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12868236/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146124631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Editorial: Machine learning and applied neuroscience, volume II. 编辑:机器学习和应用神经科学,第二卷。
IF 2.8 4区 计算机科学 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-01-20 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1757770
Wellington Pinheiro Dos Santos, Vincenzo Conti, Orazio Gambino, Ganesh R Naik
{"title":"Editorial: Machine learning and applied neuroscience, volume II.","authors":"Wellington Pinheiro Dos Santos, Vincenzo Conti, Orazio Gambino, Ganesh R Naik","doi":"10.3389/fnbot.2025.1757770","DOIUrl":"https://doi.org/10.3389/fnbot.2025.1757770","url":null,"abstract":"","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1757770"},"PeriodicalIF":2.8,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12864380/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146118666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AMANet: a data-augmented multi-scale temporal attention convolutional network for motor imagery classification. AMANet:用于运动图像分类的数据增强多尺度时间注意卷积网络。
IF 2.8 4区 计算机科学 Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-01-09 eCollection Date: 2025-01-01 DOI: 10.3389/fnbot.2025.1704111
Shu Wang, Raofen Wang, Liang Chang, Jianzhen Wu, Lingyan Hu

Motor imagery brain-computer interface (MI-BCI) has garnered considerable attention due to its potential for neural plasticity. However, the limited number of MI-EEG samples per subject and the susceptibility of features to noise and artifacts posed significant challenges for achieving high decoding performance. To address this problem, a Data-Augmented Multi-Scale Temporal Attention Convolutional Network (AMANet) was proposed. The network mainly consisted of four modules. First, the data augmentation module comprises three steps: sliding-window segmentation to increase sample size, Common Spatial Pattern (CSP) extraction of discriminative spatial features, and linear scaling to enhance network robustness. Then, multi-scale temporal convolution was incorporated to dynamically extract temporal and spatial features. Subsequently, the ECA attention mechanism was integrated to realize the adaptive adjustment of the weights of different channels. Finally, depthwise separable convolution was utilized to fully integrate and classify the deep extraction of temporal and spatial features. In 10-fold cross-validation, the results show that AMANet achieves classification accuracies of 84.06 and 85.09% on the BCI Competition IV Datasets 2a and 2b, respectively, significantly outperforming baseline models such as Incep-EEGNet. On the High-Gamma dataset, AMANet attains a classification accuracy of 95.48%. These results demonstrate the excellent performance of AMANet in motor imagery decoding tasks.
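The sliding-window segmentation step of AMANet's augmentation module, which multiplies the number of MI-EEG training samples before CSP and scaling, can be sketched as below. The window and stride lengths in the example are illustrative assumptions, not the paper's exact values.

```python
def sliding_windows(trial, win_len, stride):
    """Split one EEG trial (channels x time, as nested lists) into
    overlapping windows along the time axis; each window becomes an
    additional training sample with the trial's label."""
    n_t = len(trial[0])  # number of time samples per channel
    return [
        [ch[start:start + win_len] for ch in trial]
        for start in range(0, n_t - win_len + 1, stride)
    ]
```

With stride smaller than the window length, consecutive windows overlap, so a single trial yields several correlated but distinct samples — the main lever against the small per-subject sample counts the abstract points to.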

{"title":"AMANet: a data-augmented multi-scale temporal attention convolutional network for motor imagery classification.","authors":"Shu Wang, Raofen Wang, Liang Chang, Jianzhen Wu, Lingyan Hu","doi":"10.3389/fnbot.2025.1704111","DOIUrl":"10.3389/fnbot.2025.1704111","url":null,"abstract":"<p><p>Motor imagery brain-computer interface (MI-BCI) has garnered considerable attention due to its potential for neural plasticity. However, the limited number of MI-EEG samples per subject and the susceptibility of features to noise and artifacts posed significant challenges for achieving high decoding performance. To address this problem, a Data-Augmented Multi-Scale Temporal Attention Convolutional Network (AMANet) was proposed. The network mainly consisted of four modules. First, the data augmentation module comprises three steps: sliding-window segmentation to increase sample size, Common Spatial Pattern (CSP) extraction of discriminative spatial features, and linear scaling to enhance network robustness. Then, multi-scale temporal convolution was incorporated to dynamically extract temporal and spatial features. Subsequently, the ECA attention mechanism was integrated to realize the adaptive adjustment of the weights of different channels. Finally, depthwise separable convolution was utilized to fully integrate and classify the deep extraction of temporal and spatial features. In 10-fold cross-validation, the results show that AMANet achieves classification accuracies of 84.06 and 85.09% on the BCI Competition IV Datasets 2a and 2b, respectively, significantly outperforming baseline models such as Incep-EEGNet. On the High-Gamma dataset, AMANet attains a classification accuracy of 95.48%. These results demonstrate the excellent performance of AMANet in motor imagery decoding tasks.</p>","PeriodicalId":12628,"journal":{"name":"Frontiers in Neurorobotics","volume":"19 ","pages":"1704111"},"PeriodicalIF":2.8,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12827673/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146051670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0