
Journal of Field Robotics — Latest Publications

Cover Image, Volume 42, Number 8, December 2025
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-11-17 | DOI: 10.1002/rob.70115
Sehwa Chun, Hiroki Yokohata, Kenji Ohkuma, Shouhei Ito, Shinichiro Hirabayashi, Toshihiro Maki

The cover image is based on the article Tracking mooring lines of floating structures by an autonomous underwater vehicle by Sehwa Chun et al., 10.1002/rob.70076.

Citations: 0
Cover Image, Volume 42, Number 7, October 2025
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-11-06 | DOI: 10.1002/rob.70071

The image illustrates a jet-powered personal aerial vehicle (PAV) performing vertical take-off and landing within a post-disaster urban environment. The system integrates five micro turbojet engines and a dual-degree-of-freedom vector control mechanism to achieve high maneuverability, stability, and fault tolerance. This work demonstrates the feasibility of a compact, human-scale VTOL platform capable of safe operation in complex field conditions, with potential applications in rescue and rapid response missions.

Citations: 0
Internet of Robotic Things Evolution, Standards and Data Interoperability Best Practices for the Next Generation of Artificial Intelligence-Powered Systems
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-10-05 | DOI: 10.1002/rob.70063
Amelie Gyrard, Edison Pignaton de Freitas, Martin Serrano, Howard Li, Paulo Gonçalves, João Quintas, Ovidiu Vermesan, Alberto Olivares-Alarcos, Antonio Kung, Filippo Cavallo

The Internet of Robotic Things (IoRT) represents the rise of a new paradigm enabling robots to serve not only as autonomous units but also as intelligent interconnected entities that can interact, collaborate, and share information through the edge, cloud, and other data networks. IoRT marks a technological advance: the fusion of robotics with the Internet of Things (IoT), artificial intelligence (AI), and edge computing. IoRT can benefit from the next-generation spatial web, Web 4.0 (the intelligent immersive knowledge Web), by enhancing data processing, situational awareness, and integration with immersive technologies, software-defined automation (SDA), and spatial computing technologies. Semantic Web and Web 4.0 technologies are becoming common in robotics projects for exchanging data and enabling data set interoperability. The main challenge is to upgrade how robotic things interact with each other and their environment in a more situation-aware fashion. This paper reviews the definition of IoRT in light of the latest developments in sensor technology and data management systems and uses a novel survey methodology to find, classify, and reuse robotic expertise and present it to the community and engineering experts. The survey is shared through the LOV4IoT-Robotics ontology catalog, which is available online. This catalog demonstrates how best practices for data sharing and data set interoperability are also used to extract robotic knowledge semi-automatically. A set of relevant semantic-enabled projects designed by domain experts that focused on extracting robotic knowledge was included.

Citations: 0
Robots Inspired by Inchworms: Structural Design and Applications
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-10-05 | DOI: 10.1002/rob.70087
Jingang Jiang, Yanxin Yu, Chuan Lin, Jianpeng Sun, Xuefeng Ma, Han Wang

Bionically designed crawling robots can adapt to complex environments and are widely used in military reconnaissance, environmental monitoring, and infrastructure inspection. The inchworm's unique crawling gait provides new ideas for robot design. This paper reviews recent progress in the design and application of inchworm-inspired robots. First, the torso design of the inchworm robot is introduced, focusing on external-stimulus actuation modes such as light, magnetism, and humidity, as well as pressure, electric, and motor actuation modes. Second, the design of the head and tail structures of the inchworm robot is analyzed; a variety of anchoring techniques, such as vacuum adsorption, magnetic adsorption, and electro-adhesion, are explored, with their respective advantages and disadvantages discussed. Finally, the paper looks ahead to future application scenarios and development directions of inchworm-inspired robots, providing guidance for future research.

Citations: 0
Tracking Mooring Lines of Floating Structures by an Autonomous Underwater Vehicle
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-09-30 | DOI: 10.1002/rob.70076
Sehwa Chun, Hiroki Yokohata, Kenji Ohkuma, Shouhei Ito, Shinichiro Hirabayashi, Toshihiro Maki

This study presents a novel method for tracking mooring lines of Floating Offshore Wind Turbines (FOWTs) using an Autonomous Underwater Vehicle (AUV) equipped with a tilt-controlled Multibeam Imaging Sonar (MBS). The proposed approach enables the AUV to estimate the 3D positions of mooring lines and safely track them in real time, overcoming the limitations of traditional Remotely Operated Vehicle (ROV)-based inspections. By utilizing the tilt-controlled MBS and a pre-trained You Only Look Once (YOLO) model, the AUV identifies the mooring lines within sonar imagery and dynamically adjusts its velocities to maintain a safe distance during the inspection. A re-navigation method using the Rauch-Tung-Striebel (RTS) smoother enhances the AUV's localization accuracy by correcting its trajectory with post-processed data from sensors such as the Doppler Velocity Log (DVL), Super Short Baseline (SSBL) system, and Global Navigation Satellite System (GNSS). Additionally, reconstruction with catenary curve fitting is employed to estimate the mooring line's catenary parameters, offering insight into its potential deformation. The approach was validated using a hovering-type AUV, Tri-TON, through both tank experiments and a sea experiment at the FOWT Hibiki in Kitakyushu, Japan. In the sea experiment, the AUV successfully tracked the mooring lines for 423 s, demonstrating its ability to estimate the position and catenary parameters of the mooring lines. The experimental results highlight areas for future improvement, particularly in enhancing localization accuracy, developing robust control algorithms, and expanding the analysis of mooring line conditions. This method lays the groundwork for future advancements in automated mooring line inspections and enables the integration of additional techniques, such as visual inspection.
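The catenary-parameter estimation mentioned in the abstract can be sketched as a one-dimensional fit. This is a minimal illustration, not the paper's algorithm: it assumes a vertex-centered catenary z = a(cosh(x/a) − 1) and recovers the parameter a from sampled line points by grid search; the point set, grid, and function names are all synthetic choices for this sketch.

```python
import math

def catenary(x, a):
    # Catenary with its vertex at the origin: z = a*(cosh(x/a) - 1)
    return a * (math.cosh(x / a) - 1.0)

def fit_catenary(pts, a_grid):
    # 1-D grid search over the catenary parameter `a`, minimizing the sum
    # of squared vertical residuals. A field system would instead run a
    # nonlinear least-squares fit that also estimates the vertex offset.
    def sse(a):
        return sum((catenary(x, a) - z) ** 2 for x, z in pts)
    return min(a_grid, key=sse)

# Synthetic "sonar" samples of a mooring line from a known catenary.
a_true = 25.0
xs = [i * 0.8 - 20.0 for i in range(51)]
pts = [(x, catenary(x, a_true)) for x in xs]

a_grid = [5.0 + 0.05 * i for i in range(1101)]   # 5.0 .. 60.0 in 0.05 steps
a_hat = fit_catenary(pts, a_grid)
print(round(a_hat, 2))
```

With noise-free samples the grid search lands on the grid point nearest the true parameter; with real sonar detections the residual weighting and vertex offset would matter far more.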

Citations: 0
Visual–Acoustic-Based Framework for Online Inspection of Submerged Structures Using Autonomous Underwater Vehicles
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-09-30 | DOI: 10.1002/rob.70075
Simone Tani, Francesco Ruscio, Andrea Caiti, Riccardo Costanzi

Underwater inspections of critical maritime infrastructures are still predominantly performed by human divers, exposing them to safety risks and yielding limited accuracy and repeatability. Autonomous Underwater Vehicles (AUVs) offer a promising alternative by removing humans from hazardous environments and enabling systematic, repeatable inspection operations. However, current AUV systems lack the necessary autonomy and typically rely on prior knowledge of the environment, limiting their applicability in real-world scenarios. This study presents a visual–acoustic-based framework aimed at overcoming these limitations and moving a step closer to fully autonomous inspection operations using AUVs. Designed for cost-effective deployment on vehicles equipped with a minimal sensor suite (a stereo camera, an acoustic range sensor, an Inertial Measurement Unit with magnetometers, a pressure sensor, and a Global Positioning System used only on the surface), the framework enables inspection of unknown underwater structures without human intervention. The main contribution lies in the integration of perception and navigation into a unified architecture, allowing the AUV to leverage the exteroceptive sensor not only for scene understanding but also to support real-time control and mission adaptation. Perception data are combined with proprioceptive observations to adapt motion based on the environment, enabling autonomous management of the inspection mission and navigation with respect to the target. Furthermore, a mission manager coordinates all phases of the operation, from initial approach to structure-relative navigation and visual data acquisition. The proposed solution was validated through a sea trial, during which an AUV autonomously inspected a harbor pier. The framework computed control actions in quasi-real-time to maintain a predefined safety distance, inspection velocity, and payload orientation orthogonal to the scene. These outputs were used online as feedback within the AUV's control loop. The underwater robot completed the inspection, maintaining mission references and ensuring effective target coverage, good-quality optical data, and consistent three-dimensional reconstruction. Overall, this experimental validation demonstrates the feasibility of the proposed framework and marks a significant milestone toward the deployment of fully autonomous AUVs for real-world underwater inspection missions, even in the absence of prior knowledge about the structure.
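The structure-relative control idea, adjusting vehicle velocity to hold a predefined safety distance from the structure, can be sketched as a proportional controller on the measured range. This is a hedged toy model, not the paper's controller: `standoff_step`, the gain, the saturation limit, and the simulated pier are all illustrative choices.

```python
def standoff_step(distance, d_ref=2.0, kp=0.8, v_max=0.5):
    """Surge velocity command in m/s (positive = toward the structure).

    A proportional law on the range error, saturated to the vehicle's
    speed limits. The real framework fuses visual and acoustic ranges
    and also regulates inspection velocity and payload orientation.
    """
    v = kp * (distance - d_ref)          # close the range error
    return max(-v_max, min(v_max, v))    # saturate to vehicle limits

# Simulate an AUV starting 6 m from a pier wall; the "range sensor"
# simply reports the true distance each control cycle.
dt, dist = 0.1, 6.0
for _ in range(300):
    dist -= standoff_step(dist) * dt     # moving toward the wall shrinks range

print(round(dist, 2))
```

The loop converges to the 2 m standoff; a real implementation would add sensor noise handling and the mission-manager state machine around this inner loop.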

Citations: 0
Long-Range Time-Correlated Single-Photon Counting Lidar 3D-Reconstruction From a Moving Ground Vehicle
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-09-29 | DOI: 10.1002/rob.70091
Hannes Ovrén, Max Holmberg, Markus Henriksson

Time-correlated single-photon counting (TCSPC) lidar enables high-resolution 3D imaging at kilometer range. While previous works have covered long-range TCSPC 3D imaging from either stationary or airborne platforms, this is, to the best of our knowledge, the first attempt to use a moving ground vehicle. Fusing measurements taken at different locations and times imposes very high demands on knowledge of the sensor pose, which are hard to meet on such platforms. In this study, we use simultaneous localization and mapping (SLAM) to correct for positioning errors, allowing us to create high-fidelity point clouds using long-range TCSPC lidar imaging on a moving ground vehicle. Our method uses inertial and GNSS sensors to obtain an initial estimate of the sensor motion, which is used to reconstruct parts of the scene over short time intervals. The initial motion estimate is then refined by adding constraints from local point cloud matching, and the refined estimate is used to construct the final point cloud of the target area. We describe the sensor system and the integration of all sensors, as well as the field trial at which the system was evaluated. The proposed method is able to generate a high-fidelity point cloud of a wooded target area from a distance of roughly 800 m while measuring from a moving vehicle. Compared to measurements from a stationary position, we obtain better coverage of the target area and an increased ability to penetrate into the forest. However, some precision is lost in the reconstructed point cloud.
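The local point-cloud-matching refinement can be illustrated, under strong simplifying assumptions, by a closed-form 2-D rigid alignment with known point correspondences. The real pipeline works in 3-D and must establish correspondences iteratively (e.g. via ICP); the function name and the toy clouds below are invented for this sketch.

```python
import math

def align_2d(src, dst):
    """Closed-form 2-D rigid alignment (rotation + translation) between
    corresponding point sets, the core step inside scan-matching
    refinement. Assumes correspondences are known; ICP would re-estimate
    them by nearest-neighbour search on each iteration."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Accumulate dot- and cross-products of the centred point pairs.
    s_dot = s_cross = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    theta = math.atan2(s_cross, s_dot)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cdx - (math.cos(theta) * csx - math.sin(theta) * csy)
    ty = cdy - (math.sin(theta) * csx + math.cos(theta) * csy)
    return theta, tx, ty

# Build a "second scan" by rotating/translating a source cloud.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
th, tx, ty = 0.3, 1.5, -0.7
dst = [(math.cos(th) * x - math.sin(th) * y + tx,
        math.sin(th) * x + math.cos(th) * y + ty) for x, y in src]

theta, ex, ey = align_2d(src, dst)
print(round(theta, 3), round(ex, 3), round(ey, 3))
```

With exact correspondences the estimator recovers the transform exactly; in the paper's setting, many such pairwise constraints feed back into the trajectory estimate rather than being applied one scan at a time.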

Citations: 0
Haptic Teleoperation in Extended Reality for Electric Vehicle Battery Disassembly Using Gaussian Mixture Regression
IF 5.2 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2025-09-25 | DOI: 10.1002/rob.70079
Alireza Rastegarpanah, Carmelo Mineo, Cesar Alan Contreras, Abdelaziz Shaarawy, Giovanni Paragliola, Rustam Stolkin

We present a comprehensive teleoperation framework for electric vehicle (EV) battery cell handling, integrating haptic feedback, extended reality (XR) visualization, and task-parameterized Gaussian mixture regression (TP-GMR) for adaptive, real-time trajectory generation. The system enables seamless switching between manual and autonomous operation through a variable autonomy mechanism, while constraint barrier functions (CBFs) enforce spatial safety constraints. A lightweight intent prediction module anticipates user deviation and precomputes corrective trajectories, reducing response time from 2.0 s to under 1 ms. The framework is implemented on an industrial KUKA robotic manipulator and validated in structured and real-world EV battery disassembly scenarios. Results show that combining XR and haptic feedback reduces task completion time by up to 48% and path deviation by 32%, compared to manual teleoperation without assistance. Predictive replanning improves continuity of force feedback and reduces unnecessary user motion. The integration of XR-based spatial computing, learning-from-demonstration, and real-time control enables safe, precise, and efficient manipulation in high-risk environments. This study demonstrates a scalable human-in-the-loop solution for battery recycling and other semi-structured tasks, where full automation is impractical. The proposed system significantly improves operator performance while maintaining safety and flexibility, marking a meaningful advancement in collaborative field robotics.
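The regression step of (TP-)GMR can be sketched as conditioning a joint Gaussian mixture over (time, position) on the query time. The two-component mixture below is hand-set for illustration, not learned from demonstrations as in the paper, and `gmr` covers only the 1-D-in, 1-D-out case.

```python
import math

def gmr(t, comps):
    """Gaussian mixture regression: E[x | t] for a joint GMM over (t, x).

    Each component is (prior, mu_t, mu_x, var_t, cov_tx). The cov_tx
    term gives each component a local linear trend; it is zero in this
    toy mixture, so each component contributes its mean position.
    """
    weights, means = [], []
    for prior, mu_t, mu_x, var_t, cov_tx in comps:
        # Responsibility of this component for the query time t.
        w = prior * math.exp(-0.5 * (t - mu_t) ** 2 / var_t) \
            / math.sqrt(2 * math.pi * var_t)
        # Conditional mean of x given t within the component.
        m = mu_x + cov_tx / var_t * (t - mu_t)
        weights.append(w); means.append(m)
    z = sum(weights)
    return sum(w * m for w, m in zip(weights, means)) / z

# Two components: early motion near x = 0, late motion near x = 1.
comps = [(0.5, 0.25, 0.0, 0.01, 0.0),
         (0.5, 0.75, 1.0, 0.01, 0.0)]
print(round(gmr(0.25, comps), 3), round(gmr(0.5, comps), 3),
      round(gmr(0.75, comps), 3))
```

The regressed trajectory blends smoothly between the components as the query time sweeps through them, which is what makes GMR attractive for real-time trajectory generation.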

引用次数: 0
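The TP-GMR component in the abstract above regresses a trajectory from a joint Gaussian mixture over time and position. As a minimal illustration (not the authors' implementation, and collapsed to one input and one output dimension), the sketch below conditions a joint GMM over (t, x) on a query time to obtain the expected position; the component weights, means, and covariances are hypothetical inputs that would normally come from demonstrations:

```python
import numpy as np

def gmr_predict(t_query, weights, means, covs):
    """Condition a joint GMM over (t, x) on time t to get E[x | t].

    means[k] = [mu_t, mu_x]; covs[k] = 2x2 joint covariance of (t, x).
    """
    numerators = []
    cond_means = []
    for pi_k, mu, S in zip(weights, means, covs):
        var_t = S[0, 0]
        # 1-D Gaussian likelihood of the query time under component k
        lik = np.exp(-0.5 * (t_query - mu[0]) ** 2 / var_t) / np.sqrt(2 * np.pi * var_t)
        numerators.append(pi_k * lik)
        # Conditional mean of x given t for this component
        cond_means.append(mu[1] + S[1, 0] / var_t * (t_query - mu[0]))
    # Responsibilities weight the per-component conditional means
    h = np.array(numerators) / np.sum(numerators)
    return float(h @ np.array(cond_means))
```

Evaluating `gmr_predict` on a dense grid of times yields a smooth trajectory that interpolates between the mixture components — the same mechanism TP-GMR uses per task frame before blending frames.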
A Drill Pipe Positioning Method for Drilling Robot of Rockburst Prevention Based on Improved YOLOv8 基于改进YOLOv8的防岩爆钻井机器人钻杆定位方法
IF 5.2 2区 计算机科学 Q2 ROBOTICS Pub Date : 2025-09-24 DOI: 10.1002/rob.70083
Xinhua Liu, Zhibin He, Dezheng Hua, Yunfei Zhu, Xiaoqiang Guo

The rockburst-prevention drilling robot is key equipment for underground rock burst pressure relief in coal mines, and drill pipe positioning is the basis and premise for realizing unmanned pressure relief operation. Based on an analysis of the characteristics and defects of current positioning methods, a drill pipe positioning method based on improved YOLOv8 is proposed in this paper. Firstly, a drill pipe image data set simulating the coal mine working state is collected and established. A fusion of deformable convolution and the CBAM attention mechanism is proposed to enhance the image feature extraction capability. Moreover, the rotation decoupling head (RDH) and the DP-YOLOv8 network structure are designed to predict the angle information of drill pipes with large aspect ratios. Finally, pixel-wise alignment of the depth and color images is performed, and the three-dimensional coordinates of the drill pipe are obtained through a coordinate system transformation. Experimental results show that the proposed drill pipe positioning method achieves precision, recall, F1-score, and mAP50 of 96.19%, 96.47%, 96.33%, and 96.24%, respectively. The absolute error for drill pipe positioning is 0.015 m, with an average error of 0.009 m. The maximum angle error is 0.4°, with an average error of 0.225°.

{"title":"A Drill Pipe Positioning Method for Drilling Robot of Rockburst Prevention Based on Improved YOLOv8","authors":"Xinhua Liu,&nbsp;Zhibin He,&nbsp;Dezheng Hua,&nbsp;Yunfei Zhu,&nbsp;Xiaoqiang Guo","doi":"10.1002/rob.70083","DOIUrl":"https://doi.org/10.1002/rob.70083","url":null,"abstract":"<div>\u0000 \u0000 <p>Drilling robot of rockburst prevention is a key equipment for underground rock burst relief in coal mines, and drill pipe positioning is the basis and premise for realizing unmanned pressure relief operation. Based on the analysis of the characteristics and defects of the current positioning methods, a drill pipe positioning method based on improved YOLOv8 is proposed in this paper. Firstly, a drill pipe image data set simulated coal mine working state is collected and established. A fusion of deformable convolution and CBAM attention mechanism is proposed to enhance the image feature extraction capability. Moreover, the rotation decoupling head (RDH) and DP-YOLOv8 network structure are designed to predict the angle information of drill pipes with large aspect ratios. Finally, pixel-wise alignment of depth and color images is performed, and three-dimensional coordinates of the drill pipe are obtained through coordinate system transformation. Experimental results show that the proposed drill pipe positioning method achieves precision, recall, <i>F</i>1-score, and mAP50 of 96.19%, 96.47%, 96.33%, and 96.24%, respectively. The absolute error for drill pipe positioning is 0.015 m, with an average error of 0.009 m. 
The maximum angle error is 0.4°, with an average error of 0.225°.</p>\u0000 </div>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"43 2","pages":"1111-1129"},"PeriodicalIF":5.2,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146139970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
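The final step described in the abstract — pixel-wise alignment of the depth and color images followed by a coordinate-system transformation — amounts to pinhole back-projection of the detected pixel at its measured depth, then a rigid transform into the robot frame. A minimal sketch under an assumed intrinsic matrix `K` and extrinsic matrix `T_base_cam` (both hypothetical; the paper's calibration values are not given here):

```python
import numpy as np

def pixel_to_base(u, v, depth_m, K, T_base_cam):
    """Back-project an aligned (color, depth) pixel to 3-D camera coordinates,
    then map the point into the robot base frame.

    K: 3x3 camera intrinsics; T_base_cam: 4x4 homogeneous extrinsics.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Pinhole back-projection: scale the normalized ray by the measured depth
    p_cam = np.array([(u - cx) * depth_m / fx,
                      (v - cy) * depth_m / fy,
                      depth_m,
                      1.0])
    # Rigid transform from camera frame to robot base frame
    return (T_base_cam @ p_cam)[:3]
```

With the drill pipe's detected center pixel and its aligned depth value as input, this returns the 3-D target the manipulator servos toward; the rotated-box angle from the RDH head would additionally fix the pipe's orientation.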
Friction Shock Absorbers and Reverse Thrust for Fast Multirotor Landing on High-Speed Vehicles 高速飞行器多旋翼快速着陆的摩擦减震器和逆推力
IF 5.2 2区 计算机科学 Q2 ROBOTICS Pub Date : 2025-09-24 DOI: 10.1002/rob.70069
Isaac Tunney, John Bass, Alexis Lussier Desbiens

Typical landing gears of small uninhabited aerial vehicles (UAVs) limit their capability to land on vehicles moving at more than 20–50 km/h due to high drag forces, high pitch angles and potentially high relative horizontal velocities. To enable landing at higher speeds, a combination of lightweight friction shock absorbers and reverse thrust was developed. This allows for rapid descents (i.e., 3 m/s) toward the vehicle while leveling at the last instant. Simulations show that the proposed system (1) is more robust at higher descent speeds, in contrast to traditional configurations, (2) can touch down at almost any time during the leveling maneuver, thus reducing timing constraints, and (3) is robust to many environmental, design and operational factors, maintaining a success rate above 80% at up to 100 km/h. Compared to standard multirotors, this approach expands the possible state envelope at touchdown by a factor of 60. A total of 38 experimental trials were conducted in which a drone successfully landed on a pickup truck moving at speeds ranging from 10 to 110 km/h. The increased touchdown envelope was shown to improve the multirotor's robustness to external disturbances such as winds and wind gusts, sensor errors and unpredictable motion of the ground vehicle. The increased landing capabilities also expand the flight envelope at the start of the leveling maneuver by a factor of 38 compared to a standard multirotor, thereby allowing the drone to fly in tougher conditions and initiate its leveling maneuver from a broader range of altitudes, vertical and horizontal velocities, as well as pitch angles and rates.

{"title":"Friction Shock Absorbers and Reverse Thrust for Fast Multirotor Landing on High-Speed Vehicles","authors":"Isaac Tunney,&nbsp;John Bass,&nbsp;Alexis Lussier Desbiens","doi":"10.1002/rob.70069","DOIUrl":"https://doi.org/10.1002/rob.70069","url":null,"abstract":"<p>Typical landing gears of small uninhabited aerial vehicles (UAV) limit their capability to land on vehicles moving at more than 20–50 km/h due to high drag forces, high pitch angles and potentially high relative horizontal velocities. To enable landing at higher speeds, a combination of lightweight friction shock absorbers and reverse thrust was developed. This allows for rapid descents (i.e., 3 m/s) toward the vehicle while leveling at the last instant. Simulations show that the proposed system is (1) more robust at higher descent speeds contrary to traditional configurations, (2) can touchdown at almost any time during the leveling maneuver, thus reducing the timing constraints, and (3) is robust to many environmental, design and operational factors, maintaining a success rate above 80% up to 100 km/h. Compared to standard multirotors, this approach expands the possible state envelope at touchdown by a factor of 60. A total of 38 experimental trials were conducted where a drone successfully landed on a pickup truck moving at speeds ranging from 10 to 110 km/h. The increased touchdown envelope was shown to improve the multirotors' robustness to external disturbances such as winds and wind gusts, sensor errors and unpredictable motion of the ground vehicle. 
The increased landing capabilities also expand the flight envelope at the start of the leveling maneuver by a factor of 38 compared to a standard multirotor, thereby allowing the drone to fly in tougher conditions and initiate its leveling maneuver from a broader range of altitudes, vertical and horizontal velocities, as well as pitch angles and rates.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"43 2","pages":"1068-1090"},"PeriodicalIF":5.2,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.70069","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146136715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
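The simulation-based success rates quoted above (e.g., above 80% at up to 100 km/h) come from sampling touchdown conditions and checking them against the landing gear's tolerable envelope. The sketch below is a generic Monte Carlo envelope check, not the authors' simulator: the envelope limits, noise levels, and distributions are all made-up placeholders, purely to illustrate the idea.

```python
import numpy as np

def touchdown_success_rate(n_trials=10_000,
                           v_speed_mean=3.0, v_speed_std=0.3,
                           pitch_std_deg=4.0,
                           v_max=4.0, pitch_max_deg=10.0,
                           seed=0):
    """Monte Carlo estimate of the fraction of touchdowns falling inside an
    assumed envelope (max vertical speed and max pitch error at contact).
    All limits and noise levels here are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    v = rng.normal(v_speed_mean, v_speed_std, n_trials)   # descent speed at contact
    pitch = rng.normal(0.0, pitch_std_deg, n_trials)      # pitch error at contact
    ok = (v <= v_max) & (np.abs(pitch) <= pitch_max_deg)
    return float(ok.mean())
```

Sweeping such a check over ground-vehicle speed (which shifts the disturbance distributions) is one way to reproduce a success-rate-versus-speed curve like the one the abstract summarizes.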