Pub Date: 2024-03-04 | DOI: 10.1177/02783649241234364
Lu Chen, Lipeng Chen, Xiangchi Chen, Haojian Lu, Yu Zheng, Jun Wu, Yue Wang, Zhengyou Zhang, Rong Xiong
Physical human-robot interaction (pHRI) is widely needed in fields such as industrial manipulation, home services, and medical rehabilitation, and places high demands on robot safety. Due to the uncertainty of the working environment, pHRI may be subject to unexpected impact disturbances, which affect the safety and smoothness of task execution. The commonly used linear admittance control (L-AC) copes well with high-frequency, small-amplitude noise, but performs poorly under medium-frequency, high-intensity impacts. Inspired by the solid-liquid phase-change behavior of shear-thickening fluids, we propose a shear-thickening fluid control (SFC) that achieves both easy human-robot collaboration and resistance to impact disturbances. The SFC’s stability, passivity, and phase trajectory are analyzed in detail, its frequency- and time-domain properties are quantified, and parameter constraints in discrete control and coupled stability conditions are provided. We conducted simulations to compare the frequency- and time-domain characteristics of L-AC, nonlinear admittance control (N-AC), and SFC, and validated their dynamic properties. In real-world experiments, we compared the performance of L-AC, N-AC, and SFC on both fixed and mobile manipulators. L-AC exhibits weak resistance to impact. N-AC can resist moderate impacts but not high-intensity ones, and may exhibit self-excited oscillations. In contrast, SFC demonstrates superior impact resistance, maintains stable collaboration, and enhances comfort in cooperative water-delivery tasks. Additionally, a case study in a factory setting further affirms the SFC’s capability to facilitate human-robot collaborative manipulation and underscores its potential in industrial applications.
"Compliance while resisting: A shear-thickening fluid controller for physical human-robot interaction," The International Journal of Robotics Research.
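The core idea, damping that thickens with motion rate so gentle interaction stays compliant while impacts meet sharply rising resistance, can be illustrated with a toy admittance loop. This is a minimal sketch under assumed dynamics and parameters, not the authors' SFC formulation:

```python
# Minimal sketch of a shear-thickening admittance law (illustrative only):
# the virtual damping grows with velocity, so slow cooperative motion sees
# low resistance while fast impacts are strongly damped.
import numpy as np

def simulate(force, dt=0.001, m=2.0, d0=10.0, k_thick=200.0, p=2):
    """Integrate m*a + d(v)*v = F with d(v) = d0 + k_thick*|v|**p."""
    x, v = 0.0, 0.0
    xs = []
    for F in force:
        d = d0 + k_thick * abs(v) ** p   # velocity-dependent (thickening) damping
        a = (F - d * v) / m
        v += a * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)

t = np.arange(0.0, 2.0, 0.001)
gentle = 5.0 * np.ones_like(t)              # sustained 5 N cooperative push
impact = np.where(t < 0.02, 500.0, 0.0)     # 20 ms, 500 N impact spike

x_gentle = simulate(gentle)
x_impact = simulate(impact)
# The gentle push produces large compliant motion; the far larger impulse is
# absorbed by the thickened damping and moves the end-effector much less.
print(f"gentle push displacement: {x_gentle[-1]:.3f} m")
print(f"impact displacement:      {x_impact[-1]:.3f} m")
```

Replacing d(v) with a constant recovers the linear admittance (L-AC) behavior the paper compares against.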
Pub Date: 2024-02-29 | DOI: 10.1177/02783649241235325
Vedant Bhandari, Tyson Govan Phillips, Peter Ross McAree
Simultaneous Localization and Mapping (SLAM) refers to the common requirement for autonomous platforms to estimate their pose and map their surroundings. Many robust, real-time methods exist for solving the SLAM problem. Most are divided into a front-end, which performs incremental pose estimation, and a back-end, which smooths and corrects the results. A low-drift front-end odometry solution is needed for robust and accurate back-end performance. Front-end methods employ various techniques, such as point cloud-to-point cloud (PC2PC) registration, key feature extraction and matching, and deep learning-based approaches. Front-end algorithms have become increasingly complex in the search for low-drift solutions, and many now have large configuration parameter sets. Ideally, a front-end algorithm should be inherently robust, so that it need not be tuned by several, perhaps many, configuration parameters to achieve low drift in varied environments. To address this issue, we propose Simple Mapping and Localization Estimation (SiMpLE), a front-end LiDAR-only odometry method that requires only five low-sensitivity configurable parameters. SiMpLE is a scan-to-map point cloud registration algorithm that is straightforward to understand, configure, and implement. We evaluate SiMpLE using the KITTI, MulRan, and UrbanNav datasets, and a dataset created at the University of Queensland. SiMpLE performs among the top-ranked algorithms on the KITTI dataset and outperforms all prominent open-source approaches on the MulRan dataset, whilst having the smallest configuration set. The UQ dataset also demonstrates accurate odometry with low-density point clouds from Velodyne VLP-16 and Livox Horizon LiDARs. SiMpLE is a front-end odometry solution that can be integrated with other sensing modalities and pose graph-based back-end methods for increased accuracy and long-term mapping. The lightweight and portable code for SiMpLE is available at https://github.com/vb44/SiMpLE.
"Minimal configuration point cloud odometry and mapping," The International Journal of Robotics Research.
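The scan-to-map registration at SiMpLE's core reduces, in its simplest form, to estimating the rigid transform that best aligns a new scan with accumulated map points. Below is a toy sketch of that alignment step using the closed-form Kabsch/SVD solution with known correspondences (SiMpLE's actual objective, correspondence handling, and optimizer differ):

```python
# Core of scan-to-map registration: find the rigid transform (R, t) that
# aligns a new scan to the map, via the Kabsch/SVD closed-form solution.
import numpy as np

def kabsch(src, dst):
    """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2 over rigid transforms."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

rng = np.random.default_rng(0)
scan = rng.uniform(-10, 10, size=(100, 2))           # toy 2D "scan"
theta = np.deg2rad(15.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.5, -0.7])
map_pts = scan @ R_true.T + t_true                   # map seen from the true pose

R_est, t_est = kabsch(scan, map_pts)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

With exact correspondences the transform is recovered to numerical precision; real front-ends iterate this step with data association (e.g., nearest neighbors) between scan and map.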
Pub Date: 2024-02-22 | DOI: 10.1177/02783649241235670
Atieh Merikh Nejadasl, Jihad Achaoui, Ilias El Makrini, Greet Van De Perre, Tom Verstraten, Bram Vanderborght
This paper focuses on improving the ergonomics of industrial workers. It addresses the critical implications of poor ergonomics, which can lead to musculoskeletal disorders over time. To tackle this challenge, a novel path-planning algorithm for human–robot collaboration is introduced. The algorithm’s essential contribution lies in determining the most ergonomic path along which a robot guides a human’s hand during task execution, facilitating a transition toward an optimized body configuration. The algorithm charts the ergonomic path by adopting a Cartesian path-planning approach and employing the cell decomposition method. The methodology was evaluated on a dataset of ten individuals, a group of male and female subjects aged between 20 and 35, one of whom was left-handed. The algorithm was applied to three different activities: “stacking an item,” “taking an object from a shelf,” and “assembling an object while sitting at a table.” The results demonstrated a significant improvement in participants’ REBA (Rapid Entire Body Assessment) scores, a standard measure of ergonomic risk, after applying the algorithm. This outcome reinforces the efficacy of the methodology in enhancing the ergonomics of industrial workers. Furthermore, the study compared A* with three heuristic functions against Dijkstra’s algorithm, aiming to identify the most effective approach for computing optimal ergonomic paths in human–robot collaboration. The findings revealed that A* with a specific heuristic function surpassed Dijkstra’s algorithm in this context. The findings highlight the potential for optimizing human–robot collaboration and offer practical implications for designing more efficient industrial work environments.
"Ergonomically optimized path-planning for industrial human–robot collaboration," The International Journal of Robotics Research.
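The A*-versus-Dijkstra comparison rests on a standard mechanism: Dijkstra is A* with a zero heuristic, and an admissible heuristic reaches the same optimal cost while expanding fewer nodes. A toy grid version of this (uniform costs and a Manhattan heuristic, not the paper's ergonomic cost map):

```python
# A* over a 4-connected grid; with h = 0 the same routine is Dijkstra.
import heapq

def shortest_path(grid, start, goal, heuristic):
    """Grid of 0 (free) / 1 (blocked) cells. Returns (path_cost, expanded_count)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(heuristic(start, goal), 0, start)]   # (f, -g, node): deeper ties first
    expanded = 0
    while pq:
        f, neg_g, node = heapq.heappop(pq)
        g = -neg_g
        if g > dist.get(node, float("inf")):
            continue                            # stale queue entry
        expanded += 1
        if node == goal:
            return g, expanded
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = ng
                    heapq.heappush(pq, (ng + heuristic((nr, nc), goal), -ng, (nr, nc)))
    return float("inf"), expanded

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
dijkstra_h = lambda a, b: 0                     # h = 0 reduces A* to Dijkstra

grid = [[0] * 20 for _ in range(20)]            # empty 20x20 workspace
cost_astar, n_astar = shortest_path(grid, (0, 0), (19, 19), manhattan)
cost_dijkstra, n_dijkstra = shortest_path(grid, (0, 0), (19, 19), dijkstra_h)
print(cost_astar, n_astar)        # same optimal cost...
print(cost_dijkstra, n_dijkstra)  # ...but Dijkstra expands far more nodes
```

The paper's finding that a well-chosen heuristic surpasses Dijkstra is this effect applied to its ergonomic cost landscape.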
Pub Date: 2024-02-21 | DOI: 10.1177/02783649231218719
Alexander Toedtheide, Edmundo Pozo Fortunić, Johannes Kühn, Elisabeth Jensen, Sami Haddadin
In this work we introduce a new type of human-inspired upper-limb prosthesis. The Artificial Neuromuscular Prosthesis (ANP) imitates the human neuromuscular system in terms of compliance, backdrivability, natural motion, proprioceptive sensing, and kinesthetics. To realize this challenging goal, we introduce a novel human-inspired, simulation-based development paradigm that designs the prosthesis mechatronics in correspondence with the human body. The ANP provides body awareness, contact awareness, and human-like contact response, realized via floating-base rigid-body models, disturbance observers, and joint impedance control, all concepts established in state-of-the-art robotics. The ANP mechatronics features four torque-controlled degrees of freedom (dof) with human-like kinematics, a tendon-driven 2-dof wrist, and spatial orientation sensing, at a weight of 1.7 kg (without hand and battery). The paper covers the rigorous mathematical modeling, control, design, and evaluation of this device type against initially defined requirements, within a single prototype. The proposed systemic and grasping capabilities are verified under laboratory conditions by an unimpaired user. Future work will raise the technology readiness level of the next-generation device, including human studies with impaired users.
"A transhumeral prosthesis with an artificial neuromuscular system: Sim2real-guided design, modeling, and control," The International Journal of Robotics Research.
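Joint impedance control, one of the building blocks the ANP borrows from torque-controlled robotics, renders each joint as a virtual spring-damper around a desired posture. A minimal 1-DoF sketch with illustrative parameters (gravity compensation and the ANP's observers omitted):

```python
# 1-DoF joint impedance control: tau_cmd = K*(q_des - q) - D*qd.
# The joint settles at the desired angle in free motion and deflects
# compliantly by tau_ext / K under a sustained contact torque.
def step(q, qd, q_des, tau_ext, K=20.0, D=5.0, I=0.1, dt=0.001):
    """One Euler step of I*qdd = tau_cmd + tau_ext for a single joint."""
    tau_cmd = K * (q_des - q) - D * qd
    qdd = (tau_cmd + tau_ext) / I
    qd += qdd * dt
    q += qd * dt
    return q, qd

# Free motion: joint converges to the commanded posture.
q, qd = 0.0, 0.0
for _ in range(5000):
    q, qd = step(q, qd, q_des=1.0, tau_ext=0.0)
print(f"settled angle: {q:.3f} rad")

# Sustained contact torque: the joint yields instead of fighting rigidly,
# settling near q_des + tau_ext / K = 1.0 - 4.0/20.0 = 0.8 rad.
for _ in range(5000):
    q, qd = step(q, qd, q_des=1.0, tau_ext=-4.0)
print(f"deflected angle: {q:.3f} rad")
```

The stiffness K sets how much the joint gives way per unit of contact torque, which is exactly the knob a compliant prosthesis exposes.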
Pub Date: 2024-02-15 | DOI: 10.1177/02783649241230098
Rowan Border, Jonathan D. Gammell
High-quality observations of the real world are crucial for a variety of applications, including producing 3D-printed replicas of small-scale scenes and conducting inspections of large-scale infrastructure. These 3D observations are commonly obtained by combining multiple sensor measurements from different views. Guiding the selection of suitable views is known as the Next Best View (NBV) planning problem. Most NBV approaches reason about measurements using rigid data structures (e.g., surface meshes or voxel grids). These structures simplify next-best-view selection but can be computationally expensive, reduce real-world fidelity, and couple the selection of a next best view to the final data processing. This paper presents the Surface Edge Explorer (SEE), an NBV approach that selects new observations directly from previous sensor measurements without requiring rigid data structures. SEE uses measurement density to propose next best views that increase coverage of insufficiently observed surfaces while avoiding potential occlusions. Statistical results from simulated experiments show that SEE can attain similar or better surface coverage with less observation time and travel distance than the evaluated volumetric approaches on both small- and large-scale scenes. Real-world experiments demonstrate SEE autonomously observing a deer statue using a 3D sensor affixed to a robotic arm.
"The surface edge explorer (SEE): A measurement-direct approach to next best view planning," The International Journal of Robotics Research.
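The density criterion behind SEE's view proposals can be illustrated on a toy point cloud: points whose local neighborhood is sparser than a target resolution mark under-observed surface where new views should look. A brute-force sketch (illustrative thresholds; SEE's actual classification and view optimization are richer):

```python
# Flag under-observed surface points by local measurement density.
import numpy as np

rng = np.random.default_rng(1)
dense = rng.uniform(0, 1, size=(400, 3)) * [1.0, 1.0, 0.01]             # well-scanned patch
sparse = rng.uniform(0, 1, size=(20, 3)) * [1.0, 1.0, 0.01] + [2.0, 0.0, 0.0]
cloud = np.vstack([dense, sparse])

def neighbor_counts(points, radius=0.15):
    """Neighbors within `radius` of each point (brute-force pairwise distances)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return (d < radius).sum(axis=1) - 1          # exclude the point itself

counts = neighbor_counts(cloud)
under_observed = counts < 5                      # density threshold (illustrative)
# The sparsely scanned patch dominates the under-observed set, so next views
# would be proposed to cover it.
print(under_observed[:400].mean(), under_observed[400:].mean())
```

A real system would replace the O(n^2) distance matrix with a spatial index and attach view directions and occlusion checks to each flagged point.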
Pub Date: 2024-02-14 | DOI: 10.1177/02783649241231600
Richard L. Pratt, A. Petruska
Using steerable needles to enable course correction and curved trajectories can improve surgical outcomes in numerous clinical interventions, including electrode placement for deep brain stimulation. In this work, a physically motivated kinematic model of an actively steered, magnetic-tipped needle is used in closed-loop control to execute insertion trajectories. The control law is derived using Lyapunov redesign. Simulation results show this control method to be accurate across a wide range of conditions, including randomized target trajectories. Control is demonstrated experimentally in a brain tissue phantom for both initial position offset recovery and curved trajectories, with converged error averaging 0.52 mm from the target trajectory. Simulation results demonstrate the robustness of the control implementation, while experimental results exceed the accuracy required for the target application, encouraging future use in a clinical setting. Beyond needle insertion, this work has implications for general vehicle steering, as the model and control apply to systems with similar kinematics, such as boats and wheeled vehicles, that could benefit from a relaxed slip constraint.
"Magnetic needle steering control using Lyapunov redesign," The International Journal of Robotics Research.
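The relaxed-slip kinematics mentioned in the abstract behave like a unicycle: constant forward (insertion) speed with bounded steering. A toy 2D sketch of recovering an initial lateral offset with a standard heading-guidance law (not the paper's Lyapunov-redesign controller):

```python
# Unicycle-like kinematics tracking the x-axis: steer the heading toward a
# reference angle that points back at the line, then integrate the motion.
import math

v, dt = 1.0, 0.01            # forward (insertion) speed, time step
k_y, k_th = 1.0, 4.0         # guidance gains (illustrative)
x, y, th = 0.0, 0.5, 0.0     # start 0.5 units off the target line

for _ in range(2000):
    th_des = -math.atan(k_y * y)          # desired heading: back toward the line
    omega = -k_th * (th - th_des)         # steer heading toward th_des
    x += v * math.cos(th) * dt            # unicycle kinematics
    y += v * math.sin(th) * dt
    th += omega * dt
print(f"lateral error after {x:.1f} units of insertion: {abs(y):.4f}")
```

The same loop structure carries over to the needle, boat, and wheeled-vehicle cases the abstract mentions; what changes is the kinematic model and the steering law plugged into it.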
Pub Date: 2024-02-14 | DOI: 10.1177/02783649241227245
Christian Brommer, Alessandro Fornasier, Martin Scheiber, J. Delaune, R. Brockers, J. Steinbrener, Stephan Weiss
For real-world applications, autonomous mobile robotic platforms must be capable of navigating safely in a multitude of different and dynamic environments with accurate and robust localization being a key prerequisite. To support further research in this domain, we present the INSANE datasets (Increased Number of Sensors for developing Advanced and Novel Estimators)—a collection of versatile Micro Aerial Vehicle (MAV) datasets for cross-environment localization. The datasets provide various scenarios with multiple stages of difficulty for localization methods. These scenarios range from trajectories in the controlled environment of an indoor motion capture facility, to experiments where the vehicle performs an outdoor maneuver and transitions into a building, requiring changes of sensor modalities, up to purely outdoor flight maneuvers in a challenging Mars analog environment to simulate scenarios which current and future Mars helicopters would need to perform. The presented work aims to provide data that reflects real-world scenarios and sensor effects. The extensive sensor suite includes various sensor categories, including multiple Inertial Measurement Units (IMUs) and cameras. Sensor data is made available as unprocessed measurements and each dataset provides highly accurate ground truth, including the outdoor experiments where a dual Real-Time Kinematic (RTK) Global Navigation Satellite System (GNSS) setup provides sub-degree and centimeter accuracy (1-sigma). The sensor suite also includes a dedicated high-rate IMU to capture all the vibration dynamics of the vehicle during flight to support research on novel machine learning-based sensor signal enhancement methods for improved localization. The datasets and post-processing tools are available at: https://sst.aau.at/cns/datasets/insane-dataset/
"The INSANE dataset: Large number of sensors for challenging UAV flights in Mars analog, outdoor, and out-/indoor transition scenarios," The International Journal of Robotics Research.
Pub Date : 2024-02-14DOI: 10.1177/02783649241231600
Richard L. Pratt, A. Petruska
Using steerable needles to enable course correction and curved trajectories can improve surgical outcomes in numerous clinical interventions including electrode placement for deep brain stimulation. In this work, a physically motivated kinematic model for an actively steered magnetic-tipped needle is used in closed-loop control to perform insertion trajectories. The applied control law is derived using the Lyapunov redesign. Simulation results show this control method to be accurate for a wide range of conditions including randomized target trajectories. Control is performed experimentally in a brain tissue phantom for both initial position offset recovery and curved trajectories. Converged error results average 0.52 mm from target trajectory. Simulation results demonstrate the robustness of the control implementation, while experimental results exceed the accuracy required for the target application, encouraging future use in a clinical setting. Beyond needle insertion, this work has implications in general vehicle steering, as this model and control can apply to systems with similar kinematics such as boats and wheeled vehicles that could benefit from a relaxed slip constraint.
{"title":"Magnetic needle steering control using Lyapunov redesign","authors":"Richard L. Pratt, A. Petruska","doi":"10.1177/02783649241231600","DOIUrl":"https://doi.org/10.1177/02783649241231600","url":null,"abstract":"Using steerable needles to enable course correction and curved trajectories can improve surgical outcomes in numerous clinical interventions including electrode placement for deep brain stimulation. In this work, a physically motivated kinematic model for an actively steered magnetic-tipped needle is used in closed-loop control to perform insertion trajectories. The applied control law is derived using the Lyapunov redesign. Simulation results show this control method to be accurate for a wide range of conditions including randomized target trajectories. Control is performed experimentally in a brain tissue phantom for both initial position offset recovery and curved trajectories. Converged error results average 0.52 mm from target trajectory. Simulation results demonstrate the robustness of the control implementation, while experimental results exceed the accuracy required for the target application, encouraging future use in a clinical setting. 
Beyond needle insertion, this work has implications in general vehicle steering, as this model and control can apply to systems with similar kinematics such as boats and wheeled vehicles that could benefit from a relaxed slip constraint.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"72 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139837614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
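The abstract above does not spell out the paper's control law, but the general idea of Lyapunov-based steering for a nonholonomic kinematic model can be sketched. The following is a minimal illustration, not the authors' method: it assumes simple unicycle kinematics and drives the vehicle onto the line y = 0 using the candidate function V = ½y² + ½θ², for which the control ω = −kθ − v·y·sinc(θ) yields V̇ = −kθ² ≤ 0 (convergence then follows from LaSalle's invariance principle). All gains and the model itself are illustrative assumptions.

```python
import math

def simulate_line_following(y0, theta0, v=1.0, k=2.0, dt=0.01, steps=2000):
    """Drive a unicycle-model vehicle onto the line y = 0.

    Kinematics: x' = v*cos(theta), y' = v*sin(theta), theta' = omega.
    With V = 0.5*y**2 + 0.5*theta**2, the control
    omega = -k*theta - v*y*sinc(theta) gives V' = -k*theta**2 <= 0.
    Returns the final (x, y, theta) state after Euler integration.
    """
    def sinc(a):
        # sin(a)/a, continuous at a = 0
        return 1.0 if abs(a) < 1e-9 else math.sin(a) / a

    x, y, theta = 0.0, y0, theta0
    for _ in range(steps):
        omega = -k * theta - v * y * sinc(theta)
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
    return x, y, theta
```

Starting at a lateral offset of 0.5 m with a 0.3 rad heading error, both errors decay toward zero within a few seconds of simulated time, mirroring the offset-recovery experiments described in the abstract.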
Pub Date : 2024-02-14DOI: 10.1177/02783649241227245
Christian Brommer, Alessandro Fornasier, Martin Scheiber, J. Delaune, R. Brockers, J. Steinbrener, Stephan Weiss
For real-world applications, autonomous mobile robotic platforms must be capable of navigating safely in a multitude of different and dynamic environments, with accurate and robust localization as a key prerequisite. To support further research in this domain, we present the INSANE datasets (Increased Number of Sensors for developing Advanced and Novel Estimators)—a collection of versatile Micro Aerial Vehicle (MAV) datasets for cross-environment localization. The datasets provide various scenarios with multiple stages of difficulty for localization methods. These scenarios range from trajectories in the controlled environment of an indoor motion capture facility, to experiments where the vehicle performs an outdoor maneuver and transitions into a building, requiring changes of sensor modalities, up to purely outdoor flight maneuvers in a challenging Mars analog environment that simulate scenarios current and future Mars helicopters would need to perform. The presented work aims to provide data that reflects real-world scenarios and sensor effects. The extensive sensor suite spans various sensor categories, including multiple Inertial Measurement Units (IMUs) and cameras. Sensor data is made available as unprocessed measurements, and each dataset provides highly accurate ground truth, including for the outdoor experiments, where a dual Real-Time Kinematic (RTK) Global Navigation Satellite System (GNSS) setup provides sub-degree and centimeter accuracy (1-sigma). The sensor suite also includes a dedicated high-rate IMU to capture all the vibration dynamics of the vehicle during flight to support research on novel machine learning-based sensor signal enhancement methods for improved localization. The datasets and post-processing tools are available at: https://sst.aau.at/cns/datasets/insane-dataset/
{"title":"The INSANE dataset: Large number of sensors for challenging UAV flights in Mars analog, outdoor, and out-/indoor transition scenarios","authors":"Christian Brommer, Alessandro Fornasier, Martin Scheiber, J. Delaune, R. Brockers, J. Steinbrener, Stephan Weiss","doi":"10.1177/02783649241227245","DOIUrl":"https://doi.org/10.1177/02783649241227245","url":null,"abstract":"For real-world applications, autonomous mobile robotic platforms must be capable of navigating safely in a multitude of different and dynamic environments with accurate and robust localization being a key prerequisite. To support further research in this domain, we present the INSANE datasets (Increased Number of Sensors for developing Advanced and Novel Estimators)—a collection of versatile Micro Aerial Vehicle (MAV) datasets for cross-environment localization. The datasets provide various scenarios with multiple stages of difficulty for localization methods. These scenarios range from trajectories in the controlled environment of an indoor motion capture facility, to experiments where the vehicle performs an outdoor maneuver and transitions into a building, requiring changes of sensor modalities, up to purely outdoor flight maneuvers in a challenging Mars analog environment to simulate scenarios which current and future Mars helicopters would need to perform. The presented work aims to provide data that reflects real-world scenarios and sensor effects. The extensive sensor suite includes various sensor categories, including multiple Inertial Measurement Units (IMUs) and cameras. Sensor data is made available as unprocessed measurements and each dataset provides highly accurate ground truth, including the outdoor experiments where a dual Real-Time Kinematic (RTK) Global Navigation Satellite System (GNSS) setup provides sub-degree and centimeter accuracy (1-sigma). 
The sensor suite also includes a dedicated high-rate IMU to capture all the vibration dynamics of the vehicle during flight to support research on novel machine learning-based sensor signal enhancement methods for improved localization. The datasets and post-processing tools are available at: https://sst.aau.at/cns/datasets/insane-dataset/","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"41 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139778890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
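A dataset like this mixes streams at very different rates (a high-rate IMU against lower-rate cameras), so a common first preprocessing step is aligning measurements onto a shared set of timestamps. The sketch below shows one standard approach—linear interpolation of IMU readings onto camera timestamps—under assumed, hypothetical data layouts (sorted timestamp lists and scalar readings); it is not part of the INSANE post-processing tools.

```python
import bisect

def interpolate_imu_to_camera(imu_stamps, imu_values, cam_stamps):
    """Linearly interpolate scalar IMU readings onto camera timestamps.

    imu_stamps must be sorted ascending. Camera stamps that fall outside
    the IMU time span are skipped, since they cannot be bracketed.
    Returns a list of (cam_stamp, interpolated_value) pairs.
    """
    out = []
    for t in cam_stamps:
        i = bisect.bisect_right(imu_stamps, t)
        if i == 0 or i == len(imu_stamps):
            continue  # no bracketing IMU samples on both sides
        t0, t1 = imu_stamps[i - 1], imu_stamps[i]
        v0, v1 = imu_values[i - 1], imu_values[i]
        w = (t - t0) / (t1 - t0)  # interpolation weight in [0, 1]
        out.append((t, v0 + w * (v1 - v0)))
    return out
```

For vector-valued readings (e.g., 3-axis gyro rates) the same weighting applies per axis; for orientation, spherical interpolation of quaternions would replace the linear blend.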
Pub Date : 2024-02-13DOI: 10.1177/02783649241230562
Phani Teja Singamaneni, Pilar Bachiller-Burgos, Luis J. Manso, Anaís Garrell, Alberto Sanfeliu, Anne Spalanzani, Rachid Alami
Socially aware robot navigation is gaining popularity with the increase in delivery and assistive robots. The research is further fueled by a need for socially aware navigation skills in autonomous vehicles to move safely and appropriately in spaces shared with humans. Although most of these robots operate on the ground, drones are also entering the field. In this paper, we present a literature survey of the works on socially aware robot navigation over the past 10 years. We propose four faceted taxonomies for navigating the literature, each examining the field from a different perspective. Through this taxonomic review, we discuss current research directions and the expanding scope of applications across various domains. Further, we put forward a list of current research opportunities and discuss possible future challenges that are likely to emerge in the field.
{"title":"A survey on socially aware robot navigation: Taxonomy and future challenges","authors":"Phani Teja Singamaneni, Pilar Bachiller-Burgos, Luis J. Manso, Anaís Garrell, Alberto Sanfeliu, Anne Spalanzani, Rachid Alami","doi":"10.1177/02783649241230562","DOIUrl":"https://doi.org/10.1177/02783649241230562","url":null,"abstract":"Socially aware robot navigation is gaining popularity with the increase in delivery and assistive robots. The research is further fueled by a need for socially aware navigation skills in autonomous vehicles to move safely and appropriately in spaces shared with humans. Although most of these are ground robots, drones are also entering the field. In this paper, we present a literature survey of the works on socially aware robot navigation in the past 10 years. We propose four different faceted taxonomies to navigate the literature and examine the field from four different perspectives. Through the taxonomic review, we discuss the current research directions and the extending scope of applications in various domains. Further, we put forward a list of current research opportunities and present a discussion on possible future challenges that are likely to emerge in the field.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139945630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}