
Latest publications in Frontiers in Neurorobotics

Effective and efficient self-supervised masked model based on mixed feature training.
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-30 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1705970
Qingjiu Kang, Feng Liu, Chunliu Cai

Under the influence of Masked Language Modeling (MLM), Masked Image Modeling (MIM) employs an attention mechanism to perform masked training on images. However, processing a single image requires numerous iterations and substantial computational resources to reconstruct the masked regions, resulting in high computational complexity and significant time costs. To address this issue, we propose an Effective and Efficient self-supervised Masked model based on Mixed feature training (EESMM). First, we stack two images for encoding and input the fused features into the network, which not only reduces computational complexity but also enables the learning of more features. Second, during decoding, we obtain the decoding features corresponding to the original images based on the decoding features of the two input original images and the mixed images, and then construct a corresponding loss function to enhance feature representation. EESMM significantly reduces pre-training time without sacrificing accuracy, achieving 83% accuracy on ImageNet in just 363 h using four V100 GPUs, only one-tenth of the training time required by SimMIM. This validates that the method can substantially accelerate the pre-training process without noticeable performance degradation.
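The abstract does not spell out the mixing or loss details, but the core idea (encode a blend of two images once and reconstruct masked patches of both originals) can be sketched as follows. This is a minimal, illustrative SimMIM-style implementation; the module names, masking ratio, mixing coefficient, and loss weighting are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySimMIM(nn.Module):
    """SimMIM-style masked encoder used only to illustrate the mixing idea."""
    def __init__(self, patch=16, dim=256, depth=4, img=224):
        super().__init__()
        self.patch = patch
        self.num_patches = (img // patch) ** 2
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, patch * patch * 3)   # per-patch RGB prediction

    def forward(self, x, mask):                          # mask: (B, N) bool, True = masked
        tok = self.embed(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        tok = torch.where(mask[..., None], self.mask_token.expand_as(tok), tok)
        return self.head(self.encoder(tok))              # (B, N, patch*patch*3)

def patchify(img, patch=16):
    """Rearrange an image into flattened non-overlapping patches: (B, N, patch*patch*3)."""
    B, C, H, W = img.shape
    p = img.unfold(2, patch, patch).unfold(3, patch, patch)
    return p.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)

def eesmm_step(model, img_a, img_b, mask_ratio=0.5, alpha=0.5):
    """One hypothetical training step: encode a blend of two images and
    reconstruct the masked patches of *both* originals from the shared features."""
    mixed = alpha * img_a + (1.0 - alpha) * img_b
    B, N = img_a.size(0), model.num_patches
    mask = torch.rand(B, N, device=img_a.device) < mask_ratio
    pred = model(mixed, mask)
    m = mask[..., None].float()
    denom = m.sum() * pred.size(-1) + 1e-8
    loss_a = (F.mse_loss(pred, patchify(img_a, model.patch), reduction="none") * m).sum() / denom
    loss_b = (F.mse_loss(pred, patchify(img_b, model.patch), reduction="none") * m).sum() / denom
    return alpha * loss_a + (1.0 - alpha) * loss_b

# Toy usage:
# model = TinySimMIM()
# loss = eesmm_step(model, torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224))
```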

Citations: 0
A simple robot suggests trunk rotation is essential for emergence of inside leading limb during quadruped galloping turns.
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-23 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1628368
Tomoe Maeta, Shoei Hattori, Takeshi Kano, Akira Fukuhara, Akio Ishiguro

During turning maneuvers in the galloping gait of quadruped animals, a strong relationship exists between the turning direction and the sequence in which the forelimbs make ground contact: the outer forelimb acts as the "trailing limb" while the inner forelimb serves as the "leading limb." However, the control mechanisms underlying this behavior remain largely unclear. Understanding these mechanisms could deepen biological knowledge and assist in developing more agile robots. To address this issue, we hypothesized that a decentralized interlimb coordination mechanism and trunk movement are essential for the emergence of an inside leading limb in a galloping turn. To test the hypothesis, we developed a quasi-quadruped robot with simplified wheeled hind limbs and variable trunk roll and yaw angles. For forelimb coordination, we implemented a simple decentralized control based on local load-dependent sensory feedback, utilizing trunk roll inclination and yaw bending as turning methods. Our experimental results confirmed that in addition to the decentralized control from previous studies, which reproduces animal locomotion in a straight line, adjusting the trunk roll angle spontaneously generates a ground contact sequence similar to gallop turning in quadruped animals. Furthermore, roll inclination showed a greater influence than yaw bending on differentiating the leading and trailing limbs. This study suggests that physical interactions serve as a universal mechanism of locomotor control in both forward and turning movements of quadrupedal animals.
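As a rough illustration of what "simple decentralized control based on local load-dependent sensory feedback" can look like, the sketch below uses a Tegotae-style phase oscillator per forelimb with a trunk-roll bias term. The control law, constants, and sign conventions are assumptions for illustration only; the robot's actual controller may differ.

```python
import numpy as np

OMEGA = 2 * np.pi * 2.0   # intrinsic stepping frequency [rad/s] (assumed)
SIGMA = 0.6               # strength of local load feedback (assumed)
DT = 0.001                # integration step [s]

def step_limb_phase(phi, load, trunk_roll, gain_roll=0.3):
    """Advance one limb's phase using only its own ground-reaction load.
    The trunk-roll term biases left/right limbs differently, which is the kind
    of physical asymmetry the study argues produces the inside leading limb."""
    phi_dot = OMEGA - SIGMA * load * np.cos(phi) + gain_roll * trunk_roll
    return (phi + phi_dot * DT) % (2 * np.pi)

def limb_command(phi, amplitude=0.4):
    """Map the phase to a protraction/retraction target angle [rad]."""
    return amplitude * np.sin(phi)

# Toy run: two forelimb controllers that share no state with each other.
phases = np.array([0.0, np.pi])                    # left, right forelimb
roll = 0.1                                         # trunk rolled toward the turn
for _ in range(5000):
    loads = np.clip(np.cos(phases), 0.0, None)     # stand-in for load sensors
    phases = np.array([
        step_limb_phase(phases[0], loads[0], +roll),
        step_limb_phase(phases[1], loads[1], -roll),
    ])
targets = limb_command(phases)                     # joint targets for both limbs
```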

Citations: 0
TSLNet: a hierarchical multi-head attention-enabled two-stream LSTM network for accurate pedestrian tracking and behavior recognition.
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-20 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1663565
Shouye Lv, Rui He, Xiaofei Cheng, Xiaoting Ma

Accurate pedestrian tracking and behavior recognition are essential for intelligent surveillance, smart transportation, and human-computer interaction systems. This paper introduces TSLNet, a Hierarchical Multi-Head Attention-Enabled Two-Stream LSTM Network, designed to overcome challenges such as environmental variability, high-density crowds, and diverse pedestrian movements in real-world video data. TSLNet combines a Two-Stream Convolutional Neural Network (Two-Stream CNN) with Long Short-Term Memory (LSTM) networks to effectively capture spatial and temporal features. The addition of a Multi-Head Attention mechanism allows the model to focus on relevant features in complex environments, while Hierarchical Classifiers within a Multi-Task Learning framework enable the simultaneous recognition of basic and complex behaviors. Experimental results on multiple public and proprietary datasets demonstrate that TSLNet significantly outperforms existing baseline models, achieving higher Accuracy, Precision, Recall, F1-Score, and Mean Average Precision (mAP) in behavior recognition, as well as superior Multiple Object Tracking Accuracy (MOTA) and ID F1 Score (IDF1) in pedestrian tracking. These improvements highlight TSLNet's effectiveness in enhancing tracking and recognition performance.
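The architecture the abstract describes can be pictured with a compact sketch: two CNN streams, an LSTM over time, multi-head self-attention, and two hierarchical heads in a multi-task setup. All layer sizes, stream inputs, and class counts below are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class TwoStreamLSTMAttn(nn.Module):
    """Illustrative two-stream CNN + LSTM + multi-head attention classifier."""
    def __init__(self, feat=128, hidden=256, n_basic=5, n_complex=8):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat))
        self.rgb_stream, self.flow_stream = stream(), stream()
        self.lstm = nn.LSTM(2 * feat, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.head_basic = nn.Linear(hidden, n_basic)      # e.g. walk / stand / run
        self.head_complex = nn.Linear(hidden, n_complex)  # composite behaviors

    def forward(self, rgb, flow):
        # rgb, flow: (B, T, 3, H, W) clips for one tracked pedestrian
        B, T = rgb.shape[:2]
        f = lambda s, x: s(x.reshape(B * T, *x.shape[2:])).reshape(B, T, -1)
        feats = torch.cat([f(self.rgb_stream, rgb), f(self.flow_stream, flow)], dim=-1)
        seq, _ = self.lstm(feats)                          # (B, T, hidden)
        ctx, _ = self.attn(seq, seq, seq)                  # temporal self-attention
        pooled = ctx.mean(dim=1)
        return self.head_basic(pooled), self.head_complex(pooled)

# logits_basic, logits_complex = TwoStreamLSTMAttn()(rgb_clip, flow_clip)
```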

Citations: 0
Adaptive-expert-weight-based load balance scheme for dynamic routing of MoE.
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-14 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1590994
Jialin Wen, Xiaojun Li, Junping Yao, Xinyan Kong, Peng Cheng

Load imbalance is a major performance bottleneck in training mixture-of-experts (MoE) models, as unbalanced expert loads can lead to routing collapse. Most existing approaches address this issue by introducing auxiliary loss functions to balance the load; however, the hyperparameters within these loss functions often need to be tuned for different tasks. Furthermore, increasing the number of activated experts tends to exacerbate load imbalance, while fixing the activation count can reduce the model's confidence in handling difficult tasks. To address these challenges, this paper proposes a dynamically balanced routing strategy that employs a threshold-based dynamic routing algorithm. After each routing step, the method adjusts expert weights to influence the load distribution in the subsequent routing. Unlike loss-function-based balancing methods, our approach operates directly at the routing level, avoiding gradient perturbations that could degrade model quality, while dynamically routing to make more efficient use of computational resources. Experiments on Natural Language Understanding (NLU) benchmarks demonstrate that the proposed method achieves accuracy comparable to top-2 routing, while significantly reducing the load standard deviation (e.g., from 12.25 to 1.18 on MNLI). In addition, threshold-based dynamic expert activation reduces model parameters and provides a new perspective for mitigating load imbalance among experts.
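A minimal sketch of the two mechanisms the abstract describes follows: threshold-based dynamic routing (each token activates every expert whose gate probability exceeds a threshold, so the activation count varies per token) and a post-routing adjustment of per-expert weights that makes overloaded experts less attractive on the next step. The update rule and all constants are assumptions, not the authors' exact scheme.

```python
import torch

def threshold_route(gate_logits, expert_bias, threshold=0.2):
    """gate_logits: (tokens, n_experts); expert_bias: (n_experts,) adaptive weights."""
    probs = torch.softmax(gate_logits + expert_bias, dim=-1)
    active = probs >= threshold                     # variable number of experts per token
    # Guarantee at least one expert per token (fall back to the arg-max expert).
    top1 = probs.argmax(dim=-1)
    active[torch.arange(active.size(0)), top1] = True
    weights = torch.where(active, probs, torch.zeros_like(probs))
    weights = weights / weights.sum(dim=-1, keepdim=True)   # renormalize combine weights
    return weights, active

def update_expert_bias(expert_bias, active, lr=0.01):
    """Lower the bias of over-used experts and raise that of under-used ones."""
    load = active.float().mean(dim=0)               # fraction of tokens hitting each expert
    return expert_bias - lr * (load - load.mean())

# Toy usage with 512 tokens and 8 experts.
bias = torch.zeros(8)
for _ in range(100):
    logits = torch.randn(512, 8)
    weights, active = threshold_route(logits, bias)
    bias = update_expert_bias(bias, active)
print("per-expert load:", active.float().mean(dim=0))
```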

Citations: 0
UHGAN: a dual-phase GAN with Hough-transform constraints for accurate farmland road extraction.
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-13 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1691300
Xinliang Wang, Yuan Ma

Introduction: Traditional methods for farmland road extraction, such as U-Net, often struggle with complex noise and geometric features, leading to discontinuous extraction and insufficient sensitivity. To address these limitations, this study proposes a novel dual-phase generative adversarial network (GAN) named UHGAN, which integrates Hough-transform constraints.

Methods: We designed a cascaded U-Net generator within a two-stage GAN framework. The Stage 1 GAN combines a differentiable Hough transform loss with cross-entropy loss to generate initial road masks. Subsequently, the Stage 2 U-Net refines these masks by repairing breakpoints and suppressing isolated noise.

Results: When evaluated on the WHU RuR+rural road dataset, the proposed UHGAN method achieved an accuracy of 0.826, a recall of 0.750, and an F1-score of 0.789. This represents a significant improvement over the single-stage U-Net (F1 = 0.756) and ResNet (F1 = 0.762) baselines.

Discussion: The results demonstrate that our approach effectively mitigates the issues of discontinuous extraction caused by the complex geometric shapes and partial occlusion characteristic of farmland roads. The integration of Hough-transform loss, a technique that has received limited attention in prior studies, proves to be highly beneficial. This method shows considerable promise for practical applications in rural infrastructure planning and precision agriculture.
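The Methods mention a differentiable Hough-transform loss combined with cross-entropy. One way such a term can be realized is to accumulate predicted road probabilities into a soft (theta, rho) Hough space and penalize its distance from the ground-truth accumulator, as in the hedged sketch below; the bin counts, weighting, and exact formulation are assumptions rather than the paper's definition.

```python
import math
import torch
import torch.nn.functional as F

def hough_accumulator(prob, n_theta=60, n_rho=64):
    """Soft Hough accumulator: sums pixel probabilities along discretized lines.
    prob: (B, H, W) road-probability map in [0, 1]; differentiable w.r.t. prob."""
    B, H, W = prob.shape
    device = prob.device
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32), indexing="ij")
    thetas = torch.linspace(0, math.pi, n_theta, device=device)
    rho_max = math.hypot(H, W)
    acc = prob.new_zeros(B, n_theta, n_rho)
    flat = prob.reshape(B, -1)
    for t, theta in enumerate(thetas):
        rho = xs * torch.cos(theta) + ys * torch.sin(theta)           # (H, W)
        bins = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).long().clamp_(0, n_rho - 1)
        idx = bins.reshape(1, -1).expand(B, -1)
        acc[:, t].scatter_add_(1, idx, flat)                          # weighted line votes
    return acc / (H * W)

def hough_loss(pred_mask, gt_mask):
    """L1 distance between soft Hough spaces, rewarding globally straight roads."""
    return (hough_accumulator(pred_mask) - hough_accumulator(gt_mask)).abs().mean()

def stage1_loss(pred_logits, gt_mask, lam=0.1):
    """Cross-entropy (here BCE for a binary road mask) plus the Hough term."""
    bce = F.binary_cross_entropy_with_logits(pred_logits, gt_mask)
    return bce + lam * hough_loss(torch.sigmoid(pred_logits), gt_mask)
```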

Citations: 0
UAV-based intelligent traffic surveillance using recurrent neural networks and Swin transformer for dynamic environments.
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-13 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1681341
Mohammed Alshehri, Ting Wu, Nouf Abdullah Almujally, Yahya AlQahtani, Muhammad Hanzla, Ahmad Jalal, Hui Liu

Introduction: Urban traffic congestion, environmental degradation, and road safety challenges necessitate intelligent aerial robotic systems capable of real-time adaptive decision-making. Unmanned Aerial Vehicles (UAVs), with their flexible deployment and high vantage point, offer a promising solution for large-scale traffic surveillance in complex urban environments. This study introduces a UAV-based neural framework that addresses challenges such as asymmetric vehicle motion, scale variations, and spatial inconsistencies in aerial imagery.

Methods: The proposed system integrates a multi-stage pipeline encompassing contrast enhancement and region-based clustering to optimize segmentation while maintaining computational efficiency for resource-constrained UAV platforms. Vehicle detection is carried out using a Recurrent Neural Network (RNN), optimized via a hybrid loss function combining cross-entropy and mean squared error to improve localization and confidence estimation. Upon detection, the system branches into two neural submodules: (i) a classification stream utilizing SURF and BRISK descriptors integrated with a Swin Transformer backbone for precise vehicle categorization, and (ii) a multi-object tracking stream employing DeepSORT, which fuses motion and appearance features within an affinity matrix for robust trajectory association.

Results: Comprehensive evaluation on three benchmark UAV datasets (AU-AIR, UAVDT, and VAID) shows consistent and high performance. The model achieved detection precisions of 0.913, 0.930, and 0.920; tracking precisions of 0.901, 0.881, and 0.890; and classification accuracies of 92.14, 92.75, and 91.25%, respectively.

Discussion: These findings highlight the adaptability, robustness, and real-time viability of the proposed architecture in aerial traffic surveillance applications. By effectively integrating detection, classification, and tracking within a unified neural framework, the system contributes significant advancements to intelligent UAV-based traffic monitoring and supports future developments in smart city mobility and decision-making systems.
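The Methods describe a hybrid detection objective combining cross-entropy and mean squared error. A minimal sketch of such an objective is given below; the 1:1 weighting, box parameterization, and tensor shapes are placeholders rather than the authors' settings.

```python
import torch
import torch.nn.functional as F

def hybrid_detection_loss(cls_logits, box_pred, cls_target, box_target, lam=1.0):
    """cls_logits: (N, C) per-anchor class scores; box_*: (N, 4) normalized boxes."""
    ce = F.cross_entropy(cls_logits, cls_target)    # classification / confidence term
    mse = F.mse_loss(box_pred, box_target)          # localization term
    return ce + lam * mse

# Toy usage with 32 anchors and 5 vehicle classes.
loss = hybrid_detection_loss(torch.randn(32, 5), torch.rand(32, 4),
                             torch.randint(0, 5, (32,)), torch.rand(32, 4))
```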

Citations: 0
End-to-end robot intelligent obstacle avoidance method based on deep reinforcement learning with spatiotemporal transformer architecture.
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-08 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1646336
Yuwen Zhou, Weizhong Zhang

To enhance the obstacle avoidance performance and autonomous decision-making capabilities of robots in complex dynamic environments, this paper proposes an end-to-end intelligent obstacle avoidance method that integrates deep reinforcement learning, spatiotemporal attention mechanisms, and a Transformer-based architecture. Current mainstream robot obstacle avoidance methods often rely on system architectures with separated perception and decision-making modules, which suffer from issues such as fragmented feature transmission, insufficient environmental modeling, and weak policy generalization. To address these problems, this paper adopts Deep Q-Network (DQN) as the core of reinforcement learning, guiding the robot to autonomously learn optimal obstacle avoidance strategies through interaction with the environment, effectively handling continuous decision-making problems in dynamic and uncertain scenarios. To overcome the limitations of traditional perception mechanisms in modeling the temporal evolution of obstacles, a spatiotemporal attention mechanism is introduced, jointly modeling spatial positional relationships and historical motion trajectories to enhance the model's perception of critical obstacle areas and potential collision risks. Furthermore, an end-to-end Transformer-based perception-decision architecture is designed, utilizing multi-head self-attention to perform high-dimensional feature modeling on multi-modal input information (such as LiDAR and depth images), and generating action policies through a decoding module. This completely eliminates the need for manual feature engineering and intermediate state modeling, constructing an integrated learning process of perception and decision-making. Experiments conducted in several typical obstacle avoidance simulation environments demonstrate that the proposed method outperforms existing mainstream deep reinforcement learning approaches in terms of obstacle avoidance success rate, path optimization, and policy convergence speed. It exhibits good stability and generalization capabilities, showing broad application prospects for deployment in real-world complex environments.
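Since the abstract adopts a Deep Q-Network as the reinforcement-learning core, a standard DQN update step is sketched below for orientation. The network, state encoding, action set, and hyperparameters are placeholders; the Transformer-based perception stack described above is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNet(nn.Module):
    """Placeholder Q-network over a pre-extracted observation feature vector."""
    def __init__(self, obs_dim=64, n_actions=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_actions))

    def forward(self, s):
        return self.net(s)

def dqn_update(q, q_target, optimizer, batch, gamma=0.99):
    """One temporal-difference update; `done` is a 0./1. float tensor."""
    s, a, r, s_next, done = batch
    q_sa = q(s).gather(1, a.unsqueeze(1)).squeeze(1)          # Q(s, a)
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * q_target(s_next).max(dim=1).values
    loss = F.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# q, q_target = QNet(), QNet(); q_target.load_state_dict(q.state_dict())
# optimizer = torch.optim.Adam(q.parameters(), lr=1e-4)
```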

Citations: 0
DWMamba: a structure-aware adaptive state space network for image quality improvement.
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-02 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1676787
Wenjun Fu, Xiaobin Wang, Chuncai Yang, Liang Zhang, Lin Feng, Zhixiong Huang

Overcoming visual degradation in challenging imaging scenarios is essential for accurate scene understanding. Although deep learning methods have integrated various perceptual capabilities and achieved remarkable progress, their high computational cost limits practical deployment under resource-constrained conditions. Moreover, when confronted with diverse degradation types, existing methods often fail to effectively model the inconsistent attenuation across color channels and spatial regions. To tackle these challenges, we propose DWMamba, a degradation-aware and weight-efficient Mamba network for image quality enhancement. Specifically, DWMamba introduces an Adaptive State Space Module (ASSM) that employs a dual-stream channel monitoring mechanism and a soft fusion strategy to capture global dependencies. With linear computational complexity, ASSM strengthens the model's ability to address non-uniform degradations. In addition, by leveraging explicit edge priors and region partitioning as guidance, we design a Structure-guided Residual Fusion (SGRF) module to selectively fuse shallow and deep features, thereby restoring degraded details and enhancing low-light textures. Extensive experiments demonstrate that the proposed network delivers superior qualitative and quantitative performance, with strong generalization to diverse extreme lighting conditions. The code is available at https://github.com/WindySprint/DWMamba.
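As a rough idea of what an edge-guided residual fusion step like SGRF can look like, the sketch below gates shallow detail features by a Sobel edge prior before adding them back to the deep features. The edge prior, gating, and residual form are assumptions; the paper's region partitioning and exact module design are not specified here (see the released code for the real implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

SOBEL_X = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
SOBEL_Y = SOBEL_X.transpose(-1, -2)

def edge_prior(gray):
    """gray: (B, 1, H, W) luminance; returns an edge map normalized to [0, 1]."""
    gx = F.conv2d(gray, SOBEL_X.to(gray.device), padding=1)
    gy = F.conv2d(gray, SOBEL_Y.to(gray.device), padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2)
    return mag / (mag.amax(dim=(2, 3), keepdim=True) + 1e-6)

class StructureGuidedFusion(nn.Module):
    """Edge-strong regions take more shallow (texture) features; smooth regions
    keep the deep (globally enhanced) features. Shallow and deep features are
    assumed to share the same channel count and spatial size."""
    def __init__(self, ch):
        super().__init__()
        self.mix = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, shallow, deep, edge):
        edge = F.interpolate(edge, size=deep.shape[-2:], mode="bilinear",
                             align_corners=False)
        fused = self.mix(torch.cat([shallow * edge, deep * (1 - edge)], dim=1))
        return deep + fused                                   # residual fusion
```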

Citations: 0
Approaches for retraining sEMG classifiers for upper-limb prostheses.
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-10-01 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1627872
Tom Donnelly, Elena Seminati, Benjamin Metcalfe

Introduction: Abandonment rates for myoelectric upper limb prostheses can reach 44%, negatively affecting quality of life and increasing the risk of injury due to compensatory movements. Traditional myoelectric prostheses rely on conventional signal processing for the detection and classification of movement intentions, whereas machine learning offers more robust and complex control through pattern recognition. However, the non-stationary nature of surface electromyogram signals and their day-to-day variations significantly degrade the classification performance of machine learning algorithms. Although single-session classification accuracies exceeding 99% have been reported for 8-class datasets, multisession accuracies typically decrease by 23% between morning and afternoon sessions. Retraining or adaptation can mitigate this accuracy loss.

Methods: This study evaluates three paradigms for retraining a machine learning-based classifier: confidence scores, nearest neighbour window assessment, and a novel signal-to-noise ratio-based approach.

Results: The results show that all paradigms improve accuracy relative to no retraining, with the nearest neighbour and signal-to-noise ratio methods showing an average improvement of 5% in accuracy over the confidence-based approach.

Discussion: The effectiveness of each paradigm is assessed based on intersession accuracy across 10 sessions recorded over 5 days using the NinaPro 6 dataset.
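Of the three retraining paradigms, the confidence-based one is the simplest to sketch: windows from a new session that the current classifier labels with high confidence are pseudo-labelled and added to the training pool before refitting. The LDA classifier, the 0.9 threshold, and the feature pipeline below are assumptions; the nearest-neighbour and SNR-based variants would replace the selection rule.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def select_confident(clf, X_new, threshold=0.9):
    """Return windows (and their pseudo-labels) the classifier is confident about."""
    proba = clf.predict_proba(X_new)
    keep = proba.max(axis=1) >= threshold
    return X_new[keep], proba[keep].argmax(axis=1)

def retrain(clf, X_train, y_train, X_new):
    """Refit on the original data plus confidently pseudo-labelled new windows."""
    X_sel, y_sel = select_confident(clf, X_new)
    if len(X_sel) == 0:
        return clf                                   # nothing confident enough; keep model
    X_aug = np.vstack([X_train, X_sel])
    y_aug = np.concatenate([y_train, y_sel])
    return LinearDiscriminantAnalysis().fit(X_aug, y_aug)

# Usage:
# clf = LinearDiscriminantAnalysis().fit(X_session1, y_session1)
# clf = retrain(clf, X_session1, y_session1, X_session2_unlabelled)
```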

Citations: 0
Correction: Pre-training, personalization, and self-calibration: all a neural network-based myoelectric decoder needs.
IF 2.8 | CAS Tier 4, Computer Science | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-09-19 | eCollection Date: 2025-01-01 | DOI: 10.3389/fnbot.2025.1675642
Chenfei Ma, Xinyu Jiang, Kianoush Nazarpour

[This corrects the article DOI: 10.3389/fnbot.2025.1604453.].

Citations: 0