
Frontiers in Robotics and AI: Latest Publications

ATRON: Autonomous trash retrieval for oceanic neatness.
IF 3 Q2 ROBOTICS Pub Date: 2026-01-22 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1718177
John Abanes, Hyunjin Jang, Behruz Erkinov, Jana Awadalla, Anthony Tzes

The subject of this article is the development of an unmanned surface vehicle (USV) for the removal of floating debris. A twin-hulled boat with four thrusters placed at the corners of the vessel is used for this purpose. The trash is collected in a storage space through a timing belt driven by an electric motor. The debris is accumulated in a funnel positioned at the front of the boat and subsequently raised through this belt into the garbage bin. The boat is equipped with a spherical camera, a long-range 2D LiDAR, and an inertial measurement unit (IMU) for simultaneous localization and mapping (SLAM). The floating debris is identified from rectified camera frames using YOLO, while the LiDAR and IMU concurrently provide the USV's odometry. Visual methods are utilized to determine the location of debris and obstacles in the 3D environment. The optimal order in which the debris is collected is determined by solving the orienteering problem, and the planar convex hull of the boat is combined with map and obstacle data via the Open Motion Planning Library (OMPL) to perform path planning. Pure pursuit is used to generate the trajectory from the obtained path. Limits on the linear and angular velocities are experimentally estimated, and a PID controller is tuned to improve path following. The USV is evaluated in an indoor swimming pool containing static obstacles and floating debris.
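The pure-pursuit step mentioned above (turning the planned path into a trajectory) can be sketched with the standard curvature law; the function below is a minimal illustration under the usual assumptions (planar pose, a lookahead point already selected on the path), not the authors' implementation.

```python
import math

def pure_pursuit_curvature(pose, lookahead_point):
    """Curvature command steering a vehicle at `pose` = (x, y, heading)
    toward a lookahead point on the path. Standard pure-pursuit law:
    kappa = 2 * y_local / L^2, where y_local is the lateral offset of
    the lookahead point in the vehicle frame and L its distance."""
    x, y, theta = pose
    lx, ly = lookahead_point
    dx, dy = lx - x, ly - y
    # Lateral offset of the lookahead point in the vehicle frame.
    y_local = -math.sin(theta) * dx + math.cos(theta) * dy
    L2 = dx * dx + dy * dy
    return 2.0 * y_local / L2

# A lookahead point straight ahead requires no turning:
print(pure_pursuit_curvature((0.0, 0.0, 0.0), (2.0, 0.0)))  # → 0.0
```

The resulting curvature would then be converted to differential thruster commands, with the experimentally estimated linear and angular velocity limits applied as saturation bounds.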

Citations: 0
On vibration suppression of a tendon-driven soft robotic neck for the social robot HARU.
IF 3 Q2 ROBOTICS Pub Date: 2026-01-22 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1698343
Seshagopalan Thorapalli Muralidharan, Randy Gomez, Georgios Andrikopoulos

Tendon-driven continuum actuators (TDCAs) provide compliant and lifelike motion that is well suited for human-robot interaction, but their structural compliance and underactuation make them susceptible to undesired vibrations, particularly along unactuated axes under load. This work addresses vibration suppression in such systems by proposing a real-time control strategy for a two-degree-of-freedom TDCA-based soft robotic neck used in the HARU social robot, where yaw motion is unactuated and prone to oscillations due to eccentric loading. The proposed approach combines a current-based tendon pretensioning routine, baseline PID control of the actuated pitch and roll axes, and a novel Coupled Axis Indirect Vibration Suppression (CIVS) mechanism. CIVS exploits mechanical cross-axis coupling by using high-pass filtered yaw acceleration from an inertial sensor to generate transient tension modulations in the actuated tendons, thereby increasing effective damping of the unactuated yaw mode without introducing additional hardware or compromising compliance. A classical sliding mode control is also implemented as a nonlinear benchmark under identical hardware constraints. Experimental validation on the HARU neck under representative loading conditions demonstrates that the proposed method achieves substantial vibration attenuation. Compared to the baseline controller, CIVS reduces yaw angular range by approximately 53% and yaw acceleration area by over 60%, while preserving smooth, expressive motion. The results further show that CIVS outperforms the sliding mode controller in suppressing vibrations on the unactuated axis. These findings indicate that indirect, feedback-driven tendon modulation provides an effective and low-complexity solution for mitigating load-induced vibrations in underactuated soft robotic systems, making the approach particularly suitable for interactive applications where safety, compliance, and motion expressivity are critical.
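The core CIVS idea — turning high-pass filtered yaw acceleration into transient tension offsets on the actuated tendons — can be sketched as follows. The first-order filter form and the `gain` and `alpha` parameters are illustrative assumptions, not the paper's controller.

```python
def high_pass(signal, alpha=0.9):
    """Discrete first-order high-pass filter: passes transients and
    rejects slow drift, via y[k] = alpha * (y[k-1] + x[k] - x[k-1])."""
    y, x_prev, out = 0.0, signal[0], []
    for x in signal:
        y = alpha * (y + x - x_prev)
        x_prev = x
        out.append(y)
    return out

def tension_modulation(yaw_acc, gain=0.5, alpha=0.9):
    """Map filtered yaw acceleration to a transient tension offset to
    be added to the actuated tendons (illustrative CIVS-style coupling;
    `gain` and `alpha` are hypothetical tuning parameters)."""
    return [gain * v for v in high_pass(yaw_acc, alpha)]
```

A steady acceleration produces no sustained modulation, while a step transient produces a decaying tension pulse — which is why the scheme can damp oscillations on the unactuated yaw axis without biasing the nominal pitch/roll motion.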

Citations: 0
ecg2o: a seamless extension of g2o for equality-constrained factor graph optimization.
IF 3 Q2 ROBOTICS Pub Date: 2026-01-20 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1698333
Anas Abdelkarim, Daniel Görges, Holger Voos

Factor graph optimization serves as a fundamental framework for robotic perception, enabling applications such as pose estimation, simultaneous localization and mapping (SLAM), structure-from-motion (SfM), and situational modeling. Traditionally, these methods solve unconstrained least squares problems using algorithms such as Gauss-Newton and Levenberg-Marquardt. However, extending factor graphs with native support for hard equality constraints can yield more accurate state estimates and broaden their applicability, particularly in planning and control. Prior work has addressed equality handling either by soft penalties (large weights) or by nested-loop Augmented Lagrangian (AL) schemes. In this paper, we propose a novel extension of factor graphs that seamlessly incorporates hard equality constraints without requiring additional optimization techniques. Our approach maintains the efficiency and flexibility of existing second-order optimization techniques while ensuring constraint satisfaction. To validate the proposed method, an autonomous-vehicle velocity-tracking optimal control problem is solved and benchmarked against an AL baseline, both implemented in g2o. Additional comparisons are conducted in GTSAM, where the penalty method and AL are evaluated against our g2o implementations. Moreover, we introduce ecg2o, a header-only C++ library that extends the widely used g2o library with full support for hard equality-constrained optimization. This library, along with demonstrative examples and the optimal control problem, is available as open source at https://github.com/snt-arg/ecg2o.
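The hard-constraint handling that distinguishes this approach from soft penalties can be illustrated by the underlying math: one Gauss-Newton-style step with an equality constraint amounts to solving a KKT system. The sketch below uses plain NumPy on a linear least-squares problem and is not the ecg2o API.

```python
import numpy as np

def equality_constrained_lsq(A, b, C, d):
    """Solve min ||A x - b||^2 subject to C x = d by assembling the
    KKT system [[2 A^T A, C^T], [C, 0]] [x; lam] = [2 A^T b; d].
    Illustrates hard equality constraints in a least-squares step,
    as opposed to adding C x = d as a large-weight soft penalty."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # discard the Lagrange multipliers

# Closest point to b = (1, 1) whose coordinates sum to 1:
x = equality_constrained_lsq(np.eye(2), np.array([1.0, 1.0]),
                             np.array([[1.0, 1.0]]), np.array([1.0]))
# x is (0.5, 0.5): the constraint holds exactly, not approximately.
```

Unlike a penalty term, the constraint is satisfied to solver precision regardless of weighting, which is the property the paper's extension brings to factor graphs.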

Citations: 0
Eyes ahead: a scoping review of technologies enabling humanoid robots to follow human gaze.
IF 3 Q2 ROBOTICS Pub Date: 2026-01-16 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1723527
Leana Neuber, Wolf Culemann, Ruth Maria Ingendoh, Angela Heine

Gaze is a fundamental aspect of non-verbal communication in human interaction, playing an important role in conveying attention, intentions, and emotions. A key concept in gaze-based human interaction is joint attention, the focus of two individuals on an object in a shared environment. In the context of human-robot interaction (HRI), gaze-following has become a growing research area, as it enables robots to appear more socially intelligent, engaging, and likable. While various technical approaches have been developed to achieve this capability, a comprehensive overview of existing implementations has been lacking. This scoping review addresses this gap by systematically categorizing existing solutions, offering a structured perspective on how gaze-following behavior is technically realized in the field of HRI. A systematic search was conducted across four databases, leading to the identification of 28 studies. To structure the findings, a taxonomy was developed that categorizes technological approaches along three key functional dimensions: (1) environment tracking, which involves recognizing the objects in the robot's surroundings; (2) gaze tracking, which refers to detecting and interpreting human gaze direction; and (3) gaze-environment mapping, which connects gaze information with objects in the shared environment to enable appropriate robotic responses. Across studies, a distinction emerges between constrained and unconstrained solutions. While constrained approaches, such as predefined object positions, provide high accuracy, they are often limited to controlled settings. In contrast, unconstrained methods offer greater flexibility but pose significant technical challenges. The complexity of the implementations also varies significantly, from simple rule-based approaches to advanced, adaptive systems that integrate multiple data sources. These findings highlight ongoing challenges in achieving robust and real-time gaze-following in robots, particularly in dynamic, real-world environments. Future research should focus on refining unconstrained tracking methods and leveraging advances in machine learning and computer vision to make human-robot interactions more natural and socially intuitive.

Citations: 0
Multi-view object pose distribution tracking for pre-grasp planning on mobile robots.
IF 3 Q2 ROBOTICS Pub Date: 2026-01-14 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1683931
Lakshadeep Naik, Thorbjørn Mosekjær Iversen, Jakob Wilm, Norbert Krüger

The ability to track the 6D pose distribution of an object while a mobile manipulator is still approaching it can enable the robot to pre-plan grasps, thereby improving both the time efficiency and robustness of mobile manipulation. However, tracking a 6D object pose distribution on approach can be challenging due to the limited view of the robot camera. In this study, we present a particle filter-based multi-view 6D pose distribution tracking framework that compensates for the limited view of the moving robot camera while it approaches the object by fusing observations from external stationary cameras in the environment. We extend the single-view pose distribution tracking framework (PoseRBPF) to fuse observations from external cameras. We model the object pose posterior as a multi-modal distribution and introduce techniques for fusion, re-sampling, and pose estimation from the tracked distribution to effectively handle noisy and conflicting observations from different cameras. To evaluate our framework, we also contribute a real-world benchmark dataset. Our experiments demonstrate that the proposed framework yields a more accurate quantification of object pose and associated uncertainty than previous research. Finally, we apply our framework for pre-grasp planning on mobile robots, demonstrating its practical utility.
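At the core of a PoseRBPF-style particle filter is the resampling step that concentrates particles on high-likelihood pose hypotheses after each (possibly multi-view) weight update. A minimal low-variance (systematic) resampler, shown here as a generic sketch rather than the paper's implementation:

```python
import random

def low_variance_resample(particles, weights, rng=random.Random(0)):
    """Systematic (low-variance) resampling: draw one random offset,
    then take n evenly spaced probes through the cumulative weights.
    Preserves particle diversity better than n independent draws."""
    n = len(particles)
    step = sum(weights) / n
    r = rng.uniform(0.0, step)  # single random offset
    out, c, i = [], weights[0], 0
    for k in range(n):
        u = r + k * step
        while u > c:            # advance to the particle covering u
            i += 1
            c += weights[i]
        out.append(particles[i])
    return out
```

In the multi-view setting, the weights fed into this step would combine likelihoods from the onboard and external cameras; the paper's fusion and re-sampling techniques for conflicting observations build on this basic mechanism.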

Citations: 0
Automating PINN-based kinematic resolution of robotic joints using robotic process automation frameworks.
IF 3 Q2 ROBOTICS Pub Date: 2026-01-13 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1752595
Parth Agrawal, Pavithra Sekar, Kush Kumar Kushwaha

This paper explores the integration of Physics-Informed Neural Networks (PINNs) and Robotic Process Automation (RPA) tools in modeling and controlling rigid robotic joint motion. PINNs, which integrate physical laws with neural networks, offer a promising solution for solving both forward and inverse problems in robotics, while RPA tools provide the means to automate and streamline these processes. The study discusses various PINN techniques, including Extended PINNs, Hybrid PINNs, and Minimized Loss techniques, developed to address issues such as high training costs and slow convergence rates. By combining these advanced PINN approaches with RPA tools, the research aims to enhance the precision and efficiency of robot control, motion planning, and process automation, particularly in non-linear and dynamic coupling situations. We also examine PDE-inspired PINNs for motion planning in robot navigation and manipulation, integrating them with ROS via the RPA tool itself to coordinate joint and angle movements, and explore how RPA can facilitate the implementation of these models in real-world scenarios.
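The defining ingredient of a PINN is the physics-residual term added to the ordinary data loss. As a toy illustration of that loss construction (not the paper's models), the sketch below scores candidate functions against the ODE u'' + u = 0 using central finite differences instead of a trained network's autograd derivatives:

```python
import numpy as np

def physics_residual_loss(u, t):
    """Mean-squared residual of the ODE u'' + u = 0, evaluated with
    central finite differences on uniformly sampled values. In a real
    PINN this residual is computed on the network output via automatic
    differentiation and added to the data-fitting loss."""
    dt = t[1] - t[0]
    u_tt = (u[2:] - 2 * u[1:-1] + u[:-2]) / dt**2  # second derivative
    return float(np.mean((u_tt + u[1:-1]) ** 2))

t = np.linspace(0.0, 2.0 * np.pi, 200)
# sin(t) solves u'' + u = 0, so its residual is near zero; a
# non-solution such as t**2 incurs a large physics penalty.
low = physics_residual_loss(np.sin(t), t)
high = physics_residual_loss(t**2, t)
```

Minimizing this residual alongside a data term is what steers the network toward physically consistent joint trajectories, which is the property the reviewed PINN variants try to obtain at lower training cost.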

Citations: 0
Bio-inspired cognitive robotics vs. embodied AI for socially acceptable, civilized robots.
IF 3 Q2 ROBOTICS Pub Date: 2026-01-13 eCollection Date: 2026-01-01 DOI: 10.3389/frobt.2026.1714310
Pietro Morasso

Although cognitive robotics is still a work in progress, the trend is to "free" robots from the assembly lines of the third industrial revolution and allow them to "enter human society" in large numbers and many forms, as forecasted by Industry 4.0 and beyond. Cognitive robots are expected to be intelligent, designed to learn from experience and adapt to real-world situations rather than being preprogrammed with specific actions for all possible stimuli and environmental conditions. Moreover, such robots are supposed to interact closely with human partners, cooperating with them, and this implies that robot cognition must incorporate, in a deep sense, ethical principles and, in conflict situations, evolve decision-making capabilities that can be perceived as wise. Intelligence (true vs. false), ethics (right vs. wrong), and wisdom (good vs. bad) are interrelated but independent features of human behavior, and a similar framework should also characterize the behavior of cognitive agents integrated in human society. The working hypothesis formulated in this paper is that the propensity to consolidate ethically guided behavior, possibly evolving to some kind of wisdom, requires a cognitive architecture based on bio-inspired embodied cognition, educated through development and social interaction. In contrast, the problem with current AI foundation models applied to robotics (embodied AI, EAI) is that, although they can be super-intelligent, they are intrinsically disembodied and ethically agnostic, independent of how much information was absorbed during training. We suggest that the proposed alternative may facilitate social acceptance and thus make such robots civilized.

Citations: 0
Public acceptance of cybernetic avatars in the service sector: evidence from a large-scale survey.
IF 3 Q2 ROBOTICS Pub Date: 2026-01-12 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1719342
Laura Aymerich-Franch, Tarek Taha, Takahiro Miyashita, Hiroko Kamide, Hiroshi Ishiguro, Paolo Dario

Cybernetic avatars are hybrid interaction robots or digital representations that combine autonomous capabilities with teleoperated control. This study investigates the acceptance of cybernetic avatars, with particular emphasis on robot avatars for customer service. Specifically, we explore how acceptance varies as a function of modality (physical vs. virtual), robot appearance (e.g., android, robotic-looking, cartoonish), deployment settings (e.g., shopping malls, hotels, hospitals), and functional tasks (e.g., providing information, patrolling). To this end, we conducted a large-scale survey with over 1,000 participants in Dubai. As one of the most multicultural societies worldwide, Dubai offers a rare opportunity to capture opinions from multiple cultural clusters within a single setting simultaneously, thereby overcoming the limitations of nationally bound samples and providing a more global picture of acceptance. Overall, cybernetic avatars received a high level of acceptance, with physical robot avatars receiving higher acceptance than digital avatars. In terms of appearance, robot avatars with a highly anthropomorphic robotic appearance were the most accepted, followed by cartoonish designs and androids. Animal-like appearances received the lowest level of acceptance. Among the tasks, providing information and guidance was rated as the most valued. Shopping malls, airports, public transport stations, and museums were the settings with the highest acceptance, whereas healthcare-related spaces received lower levels of support. An analysis by community cluster revealed, among other findings, that Emirati respondents were particularly accepting of android appearances, whereas participants from the 'Other Asia' cluster were particularly accepting of cartoonish appearances. Our study underscores the importance of incorporating citizen feedback from the early stages of design and deployment to enhance societal acceptance of cybernetic avatars.

{"title":"Public acceptance of cybernetic avatars in the service sector: evidence from a large-scale survey.","authors":"Laura Aymerich-Franch, Tarek Taha, Takahiro Miyashita, Hiroko Kamide, Hiroshi Ishiguro, Paolo Dario","doi":"10.3389/frobt.2025.1719342","DOIUrl":"10.3389/frobt.2025.1719342","url":null,"abstract":"<p><p>Cybernetic avatars are hybrid interaction robots or digital representations that combine autonomous capabilities with teleoperated control. This study investigates the acceptance of cybernetic avatars, with particular emphasis on robot avatars for customer service. Specifically, we explore how acceptance varies as a function of modality (physical vs. virtual), robot appearance (e.g., android, robotic-looking, cartoonish), deployment settings (e.g., shopping malls, hotels, hospitals), and functional tasks (e.g., providing information, patrolling). To this end, we conducted a large-scale survey with over 1,000 participants in Dubai. As one of the most multicultural societies worldwide, Dubai offers a rare opportunity to capture opinions from multiple cultural clusters within a single setting simultaneously, thereby overcoming the limitations of nationally bound samples and providing a more global picture of acceptance. Overall, cybernetic avatars received a high level of acceptance, with physical robot avatars receiving higher acceptance than digital avatars. In terms of appearance, robot avatars with a highly anthropomorphic robotic appearance were the most accepted, followed by cartoonish designs and androids. Animal-like appearances received the lowest level of acceptance. Among the tasks, providing information and guidance was rated as the most valued. Shopping malls, airports, public transport stations, and museums were the settings with the highest acceptance, whereas healthcare-related spaces received lower levels of support. 
An analysis by community cluster revealed, among other findings, that Emirati respondents were particularly accepting of android appearances, whereas participants from the 'Other Asia' cluster were particularly accepting of cartoonish appearances. Our study underscores the importance of incorporating citizen feedback from the early stages of design and deployment to enhance societal acceptance of cybernetic avatars.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1719342"},"PeriodicalIF":3.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12832308/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146067573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Solving robotics tasks with prior demonstration via exploration-efficient deep reinforcement learning.
IF 3 Q2 ROBOTICS Pub Date: 2026-01-12 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1682200
Chengyandan Shen, Christoffer Sloth

This paper proposes an exploration-efficient deep reinforcement learning with reference (DRLR) policy framework for learning robotics tasks incorporating demonstrations. The DRLR framework is developed based on an imitation bootstrapped reinforcement learning (IBRL) algorithm. Here, we propose to improve IBRL by modifying the action selection module. The proposed action selection module provides a calibrated Q-value, which mitigates the bootstrapping error that otherwise leads to inefficient exploration. Furthermore, to prevent the reinforcement learning (RL) policy from converging to a sub-optimal policy, soft actor-critic (SAC) is used as the RL policy instead of twin delayed DDPG (TD3). The effectiveness of our method in mitigating the bootstrapping error and preventing overfitting is empirically validated by learning two robotics tasks: bucket loading and open drawer, which require extensive interactions with the environment. Simulation results also demonstrate the robustness of the DRLR framework across tasks with both low and high state-action dimensions and varying demonstration qualities. To evaluate the developed framework on a real-world industrial robotics task, the bucket loading task is deployed on a real wheel loader. The sim-to-real results validate the successful deployment of the DRLR framework.
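The IBRL-style action selection described in this abstract can be sketched in a few lines. The following is a minimal, hypothetical Python illustration: the `q_value`, `il_policy`, and `rl_policy` stubs and all numeric values are invented for demonstration, and the paper's calibrated Q-value and trained SAC actor are not reproduced here.

```python
import random

# Toy critic and policies -- invented stand-ins; the paper trains these with
# SAC and imitation learning on real robot data.
def q_value(state, action):
    """Critic estimate of an action's value (toy: prefers actions near 0.5)."""
    return -abs(action - 0.5) + 0.1 * state

def il_policy(state):
    """Imitation policy distilled from prior demonstrations (stub)."""
    return 0.4

def rl_policy(state):
    """RL actor (SAC in the paper) with exploration noise (stub)."""
    return 0.4 + random.gauss(0.0, 0.2)

def select_action(state):
    """IBRL-style action selection: propose one action from each policy and
    execute the one the critic scores higher. The paper additionally
    calibrates the Q-values before this comparison to curb bootstrapping
    error; that calibration step is omitted from this sketch."""
    a_il, a_rl = il_policy(state), rl_policy(state)
    return a_il if q_value(state, a_il) >= q_value(state, a_rl) else a_rl
```

By construction the executed action is never scored worse by the critic than the demonstration policy's proposal, which is how the demonstrations keep guiding exploration early in training.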

{"title":"Solving robotics tasks with prior demonstration via exploration-efficient deep reinforcement learning.","authors":"Chengyandan Shen, Christoffer Sloth","doi":"10.3389/frobt.2025.1682200","DOIUrl":"https://doi.org/10.3389/frobt.2025.1682200","url":null,"abstract":"<p><p>This paper proposes an exploration-efficient deep reinforcement learning with reference (DRLR) policy framework for learning robotics tasks incorporating demonstrations. The DRLR framework is developed based on an imitation bootstrapped reinforcement learning (IBRL) algorithm. Here, we propose to improve IBRL by modifying the action selection module. The proposed action selection module provides a calibrated Q-value, which mitigates the bootstrapping error that otherwise leads to inefficient exploration. Furthermore, to prevent the reinforcement learning (RL) policy from converging to a sub-optimal policy, soft actor-critic (SAC) is used as the RL policy instead of twin delayed DDPG (TD3). The effectiveness of our method in mitigating the bootstrapping error and preventing overfitting is empirically validated by learning two robotics tasks: bucket loading and open drawer, which require extensive interactions with the environment. Simulation results also demonstrate the robustness of the DRLR framework across tasks with both low and high state-action dimensions and varying demonstration qualities. To evaluate the developed framework on a real-world industrial robotics task, the bucket loading task is deployed on a real wheel loader. 
The sim-to-real results validate the successful deployment of the DRLR framework.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1682200"},"PeriodicalIF":3.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12832430/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146067621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robot speech: how variability matters for child-robot interactions.
IF 3 Q2 ROBOTICS Pub Date: 2026-01-12 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1725423
Adriana Hanulíková, Nils Frederik Tolksdorf, Sarah Kapp

Spoken language is one of the most powerful tools for humans to learn, exchange information, and build social relationships. An inherent feature of spoken language is large within- and between-speaker variation across linguistic levels, from sound acoustics to prosodic, lexical, syntactic, and pragmatic choices that differ from written language. Despite advancements in text-to-speech and language models used in social robots, synthetic speech lacks human-like variability. This limitation is especially critical in interactions with children, whose developmental needs require adaptive speech input and ethically responsible design. In child-robot interaction research, robot speech design has received less attention than appearance or multimodal features. We argue that speech variability in robots needs closer examination, considering both how humans adapt to robot speech and how robots could adjust to human speech. We discuss three tensions: (1) feasibility, because dynamic human speech variability is technically challenging to model; (2) desirability, because variability may both enhance and hinder learning, usability, and trust; and (3) ethics, because digital human-like speech risks deception, while robot speech varieties may support transparency. We suggest approaching variability as a design tool while being transparent about the robot's role and capabilities. The key question is which types of variation benefit children's socio-cognitive and language learning, at which developmental stage, in which context, depending on the robot's role and persona. Integrating insights across disciplines, we outline directions for studying how specific dimensions of variability affect comprehension, engagement, language learning, and for developing vocal interactivity that is engaging, ethically transparent, and developmentally appropriate.

{"title":"Robot speech: how variability matters for child-robot interactions.","authors":"Adriana Hanulíková, Nils Frederik Tolksdorf, Sarah Kapp","doi":"10.3389/frobt.2025.1725423","DOIUrl":"10.3389/frobt.2025.1725423","url":null,"abstract":"<p><p>Spoken language is one of the most powerful tools for humans to learn, exchange information, and build social relationships. An inherent feature of spoken language is large within- and between-speaker variation across linguistic levels, from sound acoustics to prosodic, lexical, syntactic, and pragmatic choices that differ from written language. Despite advancements in text-to-speech and language models used in social robots, synthetic speech lacks human-like variability. This limitation is especially critical in interactions with children, whose developmental needs require adaptive speech input and ethically responsible design. In child-robot interaction research, robot speech design has received less attention than appearance or multimodal features. We argue that speech variability in robots needs closer examination, considering both how humans adapt to robot speech and how robots could adjust to human speech. We discuss three tensions: (1) feasibility, because dynamic human speech variability is technically challenging to model; (2) desirability, because variability may both enhance and hinder learning, usability, and trust; and (3) ethics, because digital human-like speech risks deception, while robot speech varieties may support transparency. We suggest approaching variability as a design tool while being transparent about the robot's role and capabilities. The key question is which types of variation benefit children's socio-cognitive and language learning, at which developmental stage, in which context, depending on the robot's role and persona. 
Integrating insights across disciplines, we outline directions for studying how specific dimensions of variability affect comprehension, engagement, language learning, and for developing vocal interactivity that is engaging, ethically transparent, and developmentally appropriate.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1725423"},"PeriodicalIF":3.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12832417/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146067567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Book学术 provides a free academic resource search service, making it easy for scholars in China and abroad to retrieve Chinese- and English-language literature, and is committed to delivering the most convenient, high-quality service experience.
Copyright © 2023 Book学术 All rights reserved.
ghs 京公网安备 11010802042870号 京ICP备2023020795号-1