
Journal of Field Robotics: Latest Publications

Cover Image, Volume 42, Number 8, December 2025
IF 5.2 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2025-11-17 | DOI: 10.1002/rob.70115
Sehwa Chun, Hiroki Yokohata, Kenji Ohkuma, Shouhei Ito, Shinichiro Hirabayashi, Toshihiro Maki

The cover image is based on the article "Tracking Mooring Lines of Floating Structures by an Autonomous Underwater Vehicle" by Sehwa Chun et al. (DOI: 10.1002/rob.70076).

Citations: 0
Cover Image, Volume 42, Number 7, October 2025
IF 5.2 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2025-11-06 | DOI: 10.1002/rob.70071

The image illustrates a jet-powered personal aerial vehicle (PAV) performing vertical take-off and landing within a post-disaster urban environment. The system integrates five micro turbojet engines and a dual-degree-of-freedom vector control mechanism to achieve high maneuverability, stability, and fault tolerance. This work demonstrates the feasibility of a compact, human-scale VTOL platform capable of safe operation in complex field conditions, with potential applications in rescue and rapid-response missions.

Citations: 0
Internet of Robotic Things Evolution, Standards and Data Interoperability Best Practices for the Next Generation of Artificial Intelligence-Powered Systems
IF 5.2 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2025-10-05 | DOI: 10.1002/rob.70063
Amelie Gyrard, Edison Pignaton de Freitas, Martin Serrano, Howard Li, Paulo Gonçalves, João Quintas, Ovidiu Vermesan, Alberto Olivares-Alarcos, Antonio Kung, Filippo Cavallo

The Internet of Robotic Things (IoRT) represents the rise of a new paradigm in which robots serve not only as autonomous units but also as intelligent, interconnected entities that interact, collaborate, and share information through the edge, cloud, and other data networks. IoRT is a technological advance born of the fusion of robotics with the Internet of Things (IoT), artificial intelligence (AI), and edge computing. It can benefit from the next-generation spatial web, Web 4.0 (the intelligent immersive knowledge Web), through enhanced data processing, situational awareness, and integration with immersive technologies, software-defined automation (SDA), and spatial computing. Semantic Web and Web 4.0 technologies are becoming common in robotics projects for exchanging data and enabling data set interoperability. The main challenge is to upgrade how robotic things interact with each other and with their environment in a more situation-aware fashion. This paper reviews the definition of IoRT in light of the latest developments in sensor technology and data management systems, and uses a novel survey methodology to find, classify, and reuse robotic expertise and present it to the community and to engineering experts. The survey is shared through the LOV4IoT-Robotics ontology catalog, which is available online. The catalog demonstrates how best practices for data sharing and data set interoperability can also be used to extract robotic knowledge semi-automatically. It includes a set of relevant semantics-enabled projects, designed by domain experts, that focus on extracting robotic knowledge.
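The data set interoperability the survey advocates can be illustrated with a toy example: two robots that annotate their observations with a shared vocabulary (here the real W3C SOSA namespace, used purely for illustration) can have their data merged and queried uniformly as subject-predicate-object triples. This sketch is not from the paper; all subject identifiers are hypothetical.

```python
# Minimal illustration of semantic data set interoperability: two robot data
# sets annotated with a shared vocabulary merge into one queryable triple set.
SOSA = "http://www.w3.org/ns/sosa/"  # W3C SOSA observation vocabulary

robot_a = [("robotA/obs1", SOSA + "observedProperty", "temperature"),
           ("robotA/obs1", SOSA + "hasSimpleResult", "21.5")]
robot_b = [("robotB/obs7", SOSA + "observedProperty", "temperature"),
           ("robotB/obs7", SOSA + "hasSimpleResult", "19.8")]

def query(triples, pred=None, obj=None):
    # Return subjects whose (predicate, object) pair matches the pattern;
    # None acts as a wildcard, as in a SPARQL-style triple pattern.
    return [s for s, p, o in triples
            if (pred is None or p == pred) and (obj is None or o == obj)]

merged = robot_a + robot_b
subjects = query(merged, pred=SOSA + "observedProperty", obj="temperature")
```

Because both data sets use the same predicate URI, a single query spans both robots, which is the essence of the interoperability best practices the catalog documents.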

Citations: 0
Robots Inspired by Inchworms: Structural Design and Applications
IF 5.2 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2025-10-05 | DOI: 10.1002/rob.70087
Jingang Jiang, Yanxin Yu, Chuan Lin, Jianpeng Sun, Xuefeng Ma, Han Wang

Bionically designed crawling robots can adapt to complex environments and are widely used in military reconnaissance, environmental monitoring, and infrastructure inspection. The inchworm's unique crawling gait provides new ideas for robot design. This paper reviews recent progress in the design and application of inchworm-inspired robots. First, the torso design of the inchworm robot is introduced, focusing on external-stimulus actuation modes such as light, magnetism, and humidity, as well as pressure, electric, and motor actuation. Second, the design of the head and tail structures is analyzed; a variety of anchoring techniques, such as vacuum adsorption, magnetic adsorption, and electro-adhesion, are explored, and their respective advantages and disadvantages are discussed. Finally, the paper looks ahead to future application scenarios and development directions for inchworm robots, providing guidance for future research.

Citations: 0
Tracking Mooring Lines of Floating Structures by an Autonomous Underwater Vehicle
IF 5.2 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2025-09-30 | DOI: 10.1002/rob.70076
Sehwa Chun, Hiroki Yokohata, Kenji Ohkuma, Shouhei Ito, Shinichiro Hirabayashi, Toshihiro Maki

This study presents a novel method for tracking mooring lines of Floating Offshore Wind Turbines (FOWTs) using an Autonomous Underwater Vehicle (AUV) equipped with a tilt-controlled Multibeam Imaging Sonar (MBS). The proposed approach enables the AUV to estimate the 3D positions of mooring lines and safely track them in real time, overcoming the limitations of traditional Remotely Operated Vehicle (ROV)-based inspections. By utilizing the tilt-controlled MBS and a pre-trained You Only Look Once (YOLO) model, the AUV identifies the mooring lines within sonar imagery and dynamically adjusts its velocities to maintain a safe distance during the inspection. A re-navigation method using the Rauch-Tung-Striebel (RTS) smoother enhances the AUV's localization accuracy by correcting its trajectory using post-processed data from sensors such as the Doppler Velocity Log (DVL), Super Short Baseline (SSBL) system, and Global Navigation Satellite System (GNSS). Additionally, reconstruction with catenary curve fitting is employed to estimate the mooring line's catenary parameters, offering insights into its potential deformation. The approach was validated using a hovering-type AUV, Tri-TON, through both tank experiments and a sea experiment at the FOWT Hibiki in Kitakyushu, Japan. In the sea experiment, the AUV successfully tracked the mooring lines for 423 s, demonstrating its ability to estimate the position and catenary parameters of the mooring lines. The experimental results highlight areas for future improvement, particularly in enhancing localization accuracy, developing robust control algorithms, and expanding the analysis of mooring line conditions. This method lays the groundwork for future advancements in automated mooring line inspections and enables the integration of additional techniques, such as visual inspection.
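The catenary-fitting step can be pictured as a one-parameter least-squares fit of the classic catenary equation y = y0 + a(cosh((x - x0)/a) - 1) to sampled line positions. The sketch below is not the authors' implementation: it assumes the vertex (x0, y0) is known and recovers only the catenary parameter `a` with a golden-section search over the residual; all names and the search bounds are illustrative.

```python
import math

def catenary(x, a, x0, y0):
    # Catenary with vertex at (x0, y0) and shape parameter a.
    return y0 + a * (math.cosh((x - x0) / a) - 1.0)

def fit_a(points, x0, y0, a_lo=1.0, a_hi=100.0, iters=60):
    # Golden-section search minimizing the sum of squared residuals over a,
    # assuming the vertex is fixed (e.g. taken from the anchor position).
    phi = (math.sqrt(5.0) - 1.0) / 2.0

    def sse(a):
        return sum((catenary(x, a, x0, y0) - y) ** 2 for x, y in points)

    lo, hi = a_lo, a_hi
    for _ in range(iters):
        c = hi - phi * (hi - lo)
        d = lo + phi * (hi - lo)
        if sse(c) < sse(d):
            hi = d
        else:
            lo = c
    return 0.5 * (lo + hi)

# Synthetic mooring-line samples generated with a = 25; the fit recovers it.
pts = [(x, catenary(x, 25.0, 0.0, 0.0)) for x in range(-30, 31, 5)]
est_a = fit_a(pts, 0.0, 0.0)
```

In practice the 3D sonar detections would first be projected onto the line's vertical plane, and the vertex would be estimated jointly rather than assumed known.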

Citations: 0
Visual–Acoustic-Based Framework for Online Inspection of Submerged Structures Using Autonomous Underwater Vehicles
IF 5.2 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2025-09-30 | DOI: 10.1002/rob.70075
Simone Tani, Francesco Ruscio, Andrea Caiti, Riccardo Costanzi

Underwater inspections of critical maritime infrastructures are still predominantly performed by human divers, exposing them to safety risks and yielding limited accuracy and repeatability. Autonomous Underwater Vehicles (AUVs) offer a promising alternative by removing humans from hazardous environments and enabling systematic, repeatable inspection operations. However, current AUV systems lack the necessary autonomy and typically rely on prior knowledge of the environment, limiting their applicability in real-world scenarios. This study presents a visual–acoustic-based framework aimed at overcoming these limitations and moving a step closer to fully autonomous inspection operations using AUVs. Designed for cost-effective deployment on vehicles equipped with a minimal sensor suite, including a stereo camera, an acoustic range sensor, an Inertial Measurement Unit with magnetometers, a pressure sensor, and a Global Positioning System (used only on the surface), the framework enables inspection of unknown underwater structures without human intervention. The main contribution lies in the integration of perception and navigation into a unified architecture, allowing the AUV to leverage the exteroceptive sensor not only for scene understanding but also to support real-time control and mission adaptation. Perception data are combined with proprioceptive observations to adapt motion based on the environment, enabling autonomous management of the inspection mission and navigation with respect to the target. Furthermore, a mission manager coordinates all phases of the operation, from initial approach to structure-relative navigation and visual data acquisition. The proposed solution was validated through a sea trial, during which an AUV autonomously inspected a harbor pier. The framework computed control actions in quasi-real-time to maintain a predefined safety distance, inspection velocity, and payload orientation orthogonal to the scene. These outputs were used online as feedback within the AUV's control loop. The underwater robot completed the inspection, maintaining mission references and ensuring effective target coverage, good-quality optical data, and consistent three-dimensional reconstruction. Overall, this experimental validation demonstrates the feasibility of the proposed framework and marks a significant milestone toward the deployment of fully autonomous AUVs for real-world underwater inspection missions, even in the absence of prior knowledge about the structure.
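The structure-relative control objective (hold a standoff distance, advance at the inspection speed, keep the payload orthogonal to the scene) can be sketched as a simple proportional law. This is a hypothetical illustration of the idea, not the paper's controller; every gain and parameter name is an assumption.

```python
def standoff_control(range_m, yaw_err_rad, d_ref=2.0, v_ref=0.2,
                     kd=0.5, kpsi=0.8):
    # Hypothetical proportional control sketch for structure-relative
    # navigation: range_m is the measured distance to the structure,
    # yaw_err_rad the angle between payload axis and structure normal.
    surge = v_ref                   # advance along the structure at v_ref
    sway = kd * (range_m - d_ref)   # close or open the standoff distance
    yaw_rate = -kpsi * yaw_err_rad  # rotate payload orthogonal to the scene
    return surge, sway, yaw_rate

# Example: 0.5 m too far from the wall, 0.1 rad off orthogonal.
surge, sway, yaw_rate = standoff_control(2.5, 0.1)
```

A real implementation would saturate the commands and feed them to the vehicle's velocity loop; the point here is only the shape of the feedback terms.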

Citations: 0
Long-Range Time-Correlated Single-Photon Counting Lidar 3D-Reconstruction From a Moving Ground Vehicle
IF 5.2 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2025-09-29 | DOI: 10.1002/rob.70091
Hannes Ovrén, Max Holmberg, Markus Henriksson

Time-correlated single-photon counting (TCSPC) lidar enables high-resolution 3D imaging at kilometer range. While previous work has covered long-range TCSPC 3D imaging from stationary or airborne platforms, this is, to the best of our knowledge, the first attempt to use a moving ground vehicle. Fusing measurements taken at different locations and times imposes very high demands on knowledge of the sensor pose, demands that are hard to meet on such platforms. In this study we use simultaneous localization and mapping (SLAM) to correct for positioning errors, allowing us to create high-fidelity point clouds using long-range TCSPC lidar imaging from a moving ground vehicle. Our method uses inertial and GNSS sensors to obtain an initial estimate of the sensor motion, which is used to reconstruct parts of the scene over short time intervals. The initial motion estimate is then refined by adding constraints from local point-cloud matching, and the refined estimate is used to construct the final point cloud of the target area. We describe the sensor system, the integration of all sensors, and the field trial at which the system was evaluated. The proposed method generates a high-fidelity point cloud of a wooded target area from a distance of roughly 800 m while measuring from a moving vehicle. Compared with measurements from a stationary position, we obtain better coverage of the target area and an increased ability to penetrate into the forest. However, some precision is lost in the reconstructed point cloud.
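The "constraints from local point cloud matching" step rests on estimating a rigid transform between two scans of the same scene. As a minimal sketch, not the authors' pipeline, here is the closed-form 2D case for known correspondences: center both point sets, recover the rotation from the cross/dot sums, then the translation. The 3D version would use an SVD (Kabsch), and real scans would need a correspondence search such as ICP.

```python
import math

def align_2d(src, dst):
    # Closed-form 2D rigid alignment (rotation + translation) between two
    # matched point sets: the building block of a scan-matching constraint.
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    s_dot = s_cross = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy   # centered source point
        bx, by = dx - cdx, dy - cdy   # centered destination point
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    theta = math.atan2(s_cross, s_dot)       # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)           # translation after rotation
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

# Recover a known transform: rotate by 0.3 rad, translate by (1, -2).
_c, _s = math.cos(0.3), math.sin(0.3)
src_pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]
dst_pts = [(_c * x - _s * y + 1.0, _s * x + _c * y - 2.0) for x, y in src_pts]
theta_hat, tx_hat, ty_hat = align_2d(src_pts, dst_pts)
```

The recovered transform would enter the SLAM problem as a relative-pose constraint between the two scan times, pulling the GNSS/inertial trajectory into agreement with the scene geometry.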

Citations: 0
Haptic Teleoperation in Extended Reality for Electric Vehicle Battery Disassembly Using Gaussian Mixture Regression
IF 5.2 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2025-09-25 | DOI: 10.1002/rob.70079
Alireza Rastegarpanah, Carmelo Mineo, Cesar Alan Contreras, Abdelaziz Shaarawy, Giovanni Paragliola, Rustam Stolkin

We present a comprehensive teleoperation framework for electric vehicle (EV) battery cell handling, integrating haptic feedback, extended reality (XR) visualization, and task-parameterized Gaussian mixture regression (TP-GMR) for adaptive, real-time trajectory generation. The system enables seamless switching between manual and autonomous operation through a variable autonomy mechanism, while constraint barrier functions (CBFs) enforce spatial safety constraints. A lightweight intent prediction module anticipates user deviation and precomputes corrective trajectories, reducing response time from 2.0 s to under 1 ms. The framework is implemented on an industrial KUKA robotic manipulator and validated in structured and real-world EV battery disassembly scenarios. Results show that combining XR and haptic feedback reduces task completion time by up to 48% and path deviation by 32%, compared to manual teleoperation without assistance. Predictive replanning improves continuity of force feedback and reduces unnecessary user motion. The integration of XR-based spatial computing, learning-from-demonstration, and real-time control enables safe, precise, and efficient manipulation in high-risk environments. This study demonstrates a scalable human-in-the-loop solution for battery recycling and other semi-structured tasks, where full automation is impractical. The proposed system significantly improves operator performance while maintaining safety and flexibility, marking a meaningful advancement in collaborative field robotics.
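Gaussian mixture regression, the core of the TP-GMR trajectory generator, conditions a joint Gaussian mixture over (input, output) on the input: each component contributes its conditional mean, weighted by its responsibility for the query point. The 1D-in/1D-out sketch below is illustrative only (a pre-fit model with made-up parameters), not the paper's task-parameterized formulation.

```python
import math

def gmr_predict(x, comps):
    # comps: list of (weight, mu_x, mu_y, var_x, cov_xy) for a pre-fit joint
    # GMM over (x, y). Conditional mean:
    #   E[y|x] = sum_k h_k(x) * (mu_y_k + cov_xy_k / var_x_k * (x - mu_x_k))
    # where h_k(x) is the responsibility of component k for the query x.
    resp = [w * math.exp(-0.5 * (x - mx) ** 2 / vx)
            / math.sqrt(2.0 * math.pi * vx)
            for w, mx, my, vx, cxy in comps]
    z = sum(resp)
    y = 0.0
    for r, (w, mx, my, vx, cxy) in zip(resp, comps):
        y += (r / z) * (my + cxy / vx * (x - mx))
    return y

# Hypothetical two-component model; querying it blends both local regressors.
demo = [(0.6, -1.0, 0.5, 0.4, 0.3), (0.4, 2.0, 3.0, 0.6, -0.2)]
y_demo = gmr_predict(1.0, demo)
```

In the task-parameterized variant, the component parameters are first transformed into the current task frames (e.g. the battery cell pose) before conditioning, which is what lets one demonstration set generalize across cell positions.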

{"title":"Haptic Teleoperation in Extended Reality for Electric Vehicle Battery Disassembly Using Gaussian Mixture Regression","authors":"Alireza Rastegarpanah,&nbsp;Carmelo Mineo,&nbsp;Cesar Alan Contreras,&nbsp;Abdelaziz Shaarawy,&nbsp;Giovanni Paragliola,&nbsp;Rustam Stolkin","doi":"10.1002/rob.70079","DOIUrl":"https://doi.org/10.1002/rob.70079","url":null,"abstract":"<p>We present a comprehensive teleoperation framework for electric vehicle (EV) battery cell handling, integrating haptic feedback, extended reality (XR) visualization, and task-parameterized Gaussian mixture regression (TP-GMR) for adaptive, real-time trajectory generation. The system enables seamless switching between manual and autonomous operation through a variable autonomy mechanism, while constraint barrier functions (CBFs) enforce spatial safety constraints. A lightweight intent prediction module anticipates user deviation and precomputes corrective trajectories, reducing response time from 2.0 s to under 1 ms. The framework is implemented on an industrial KUKA robotic manipulator and validated in structured and real-world EV battery disassembly scenarios. Results show that combining XR and haptic feedback reduces task completion time by up to 48% and path deviation by 32%, compared to manual teleoperation without assistance. Predictive replanning improves continuity of force feedback and reduces unnecessary user motion. The integration of XR-based spatial computing, learning-from-demonstration, and real-time control enables safe, precise, and efficient manipulation in high-risk environments. This study demonstrates a scalable human-in-the-loop solution for battery recycling and other semi-structured tasks, where full automation is impractical. 
The proposed system significantly improves operator performance while maintaining safety and flexibility, marking a meaningful advancement in collaborative field robotics.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"43 2","pages":"1130-1151"},"PeriodicalIF":5.2,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.70079","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146139971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Drill Pipe Positioning Method for Drilling Robot of Rockburst Prevention Based on Improved YOLOv8
IF 5.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date: 2025-09-24 DOI: 10.1002/rob.70083
Xinhua Liu, Zhibin He, Dezheng Hua, Yunfei Zhu, Xiaoqiang Guo

The rockburst-prevention drilling robot is a key piece of equipment for underground rockburst pressure relief in coal mines, and drill pipe positioning is the basis and prerequisite for unmanned pressure-relief operation. Building on an analysis of the characteristics and shortcomings of current positioning methods, this paper proposes a drill pipe positioning method based on an improved YOLOv8. First, a drill pipe image data set simulating coal mine working conditions is collected and established. A fusion of deformable convolution and the CBAM attention mechanism is proposed to enhance image feature extraction. Moreover, a rotation decoupling head (RDH) and the DP-YOLOv8 network structure are designed to predict the angle of drill pipes with large aspect ratios. Finally, pixel-wise alignment of the depth and color images is performed, and the three-dimensional coordinates of the drill pipe are obtained through a coordinate system transformation. Experimental results show that the proposed drill pipe positioning method achieves precision, recall, F1-score, and mAP50 of 96.19%, 96.47%, 96.33%, and 96.24%, respectively. The absolute error for drill pipe positioning is 0.015 m, with an average error of 0.009 m. The maximum angle error is 0.4°, with an average error of 0.225°.
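The final step of the abstract — converting a depth-aligned pixel into 3D coordinates via a coordinate transform — can be sketched with standard pinhole back-projection followed by a rigid camera-to-base transform. The intrinsics and the identity camera pose below are hypothetical placeholders, not the robot's actual calibration.

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal lengths and principal point, pixels).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

def pixel_to_camera(u, v, depth_m):
    """Back-project a depth-aligned pixel (u, v) with depth z into camera coords."""
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Camera-to-robot-base transform (rotation R, translation t); identity pose
# here purely for illustration.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

def camera_to_base(p_cam):
    """Apply the rigid coordinate-system transformation p_base = R p_cam + t."""
    return R @ p_cam + t

# Example: a detected drill pipe center at pixel (400, 300) with 1.5 m depth.
p = camera_to_base(pixel_to_camera(400, 300, 1.5))
```

With a real calibration, R and t would come from hand-eye calibration between the depth camera and the drilling robot's base frame.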

Friction Shock Absorbers and Reverse Thrust for Fast Multirotor Landing on High-Speed Vehicles
IF 5.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date: 2025-09-24 DOI: 10.1002/rob.70069
Isaac Tunney, John Bass, Alexis Lussier Desbiens

Typical landing gears of small uninhabited aerial vehicles (UAVs) limit their capability to land on vehicles moving at more than 20–50 km/h due to high drag forces, high pitch angles and potentially high relative horizontal velocities. To enable landing at higher speeds, a combination of lightweight friction shock absorbers and reverse thrust was developed. This allows for rapid descents (i.e., 3 m/s) toward the vehicle while leveling at the last instant. Simulations show that the proposed system (1) is more robust at higher descent speeds, contrary to traditional configurations, (2) can touch down at almost any time during the leveling maneuver, thus reducing the timing constraints, and (3) is robust to many environmental, design and operational factors, maintaining a success rate above 80% at speeds up to 100 km/h. Compared to standard multirotors, this approach expands the possible state envelope at touchdown by a factor of 60. A total of 38 experimental trials were conducted in which a drone successfully landed on a pickup truck moving at speeds ranging from 10 to 110 km/h. The increased touchdown envelope was shown to improve the multirotor's robustness to external disturbances such as winds and wind gusts, sensor errors and unpredictable motion of the ground vehicle. The increased landing capabilities also expand the flight envelope at the start of the leveling maneuver by a factor of 38 compared to a standard multirotor, thereby allowing the drone to fly in tougher conditions and initiate its leveling maneuver from a broader range of altitudes, vertical and horizontal velocities, as well as pitch angles and rates.
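The last-instant leveling described in the abstract comes down to a timing budget: at a given descent rate, leveling must start high enough that the vehicle is level at touchdown, and the friction skids must absorb whatever horizontal speed mismatch remains. The sketch below is back-of-envelope only, assuming a constant descent rate during leveling (a simplification of the paper's full dynamics); the 0.5 s leveling time and 0.1 m margin are illustrative, not the paper's values.

```python
def trigger_altitude(descent_speed_ms, leveling_time_s, margin_m=0.0):
    """Altitude above the deck at which leveling must begin so the multirotor
    is level at (or just before) touchdown, under constant descent."""
    return descent_speed_ms * leveling_time_s + margin_m

def relative_speed_ms(drone_kmh, vehicle_kmh):
    """Horizontal speed mismatch (m/s) the friction skids absorb at touchdown."""
    return abs(drone_kmh - vehicle_kmh) / 3.6

# 3 m/s descent with an assumed 0.5 s leveling maneuver and 0.1 m margin.
h_trigger = trigger_altitude(3.0, 0.5, margin_m=0.1)
# Drone at 105 km/h landing on a truck at 100 km/h.
dv = relative_speed_ms(105.0, 100.0)
```

With these illustrative numbers, leveling would need to begin about 1.6 m above the deck, and the skids would absorb roughly 1.4 m/s of residual horizontal speed.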
