Enabling four-arm laparoscopic surgery by controlling two robotic assistants via haptic foot interfaces
Jacob Hernandez Sanchez, Walid Amanhoud, A. Billard, M. Bouri
Pub Date: 2023-06-01 | DOI: 10.1177/02783649231180366
Robotic surgery is a promising direction for improving the daily work of surgeons and assistants relative to conventional surgery. In this work, we propose solo laparoscopic surgery in which two robotic arms, controlled via haptic foot interfaces, assist the task of the hands. Such a system opens the door to simultaneous control of four laparoscopic tools by the same user: each hand controls a manipulative tool, while one foot controls an endoscope/camera and the other controls an actuated gripper. In this scenario, the surgeon and robots need to work collaboratively within a concurrent workspace while meeting the precision demands of surgery. To this end, we propose a control framework for the robotic arms that handles all of the task- and safety-related constraints. Furthermore, to ease control through the feet, two assistance modalities are proposed: adaptive visual tracking of the laparoscopic instruments with the camera, and grasping assistance for the gripper. A user study on twelve subjects highlights the ease of use of the system and evaluates the relevance of the proposed shared-control strategies. The results confirm the feasibility of four-arm surgical-like tasks without extensive training, in tasks that involve visual-tracking and manipulation goals for the feet as well as coordination with both hands. Moreover, our study characterizes and motivates the use of robotic assistance for reducing task load, improving performance, increasing fluency, and eliciting higher coordination during four-arm laparoscopic tasks.
Integrated planning and control of robotic surgical instruments for task autonomy
Fangxun Zhong, Yun-hui Liu
Pub Date: 2023-05-13 | DOI: 10.1177/02783649231179753
Agile maneuvers are essential for robot-enabled complex tasks such as surgical procedures. Prior explorations of surgical autonomy are limited to feasibility studies of completing a single task, without systematically addressing generic manipulation safety across different tasks. We present an integrated planning and control framework for 6-DoF robotic instruments for pipeline automation of surgical tasks. We leverage the geometry of a robotic instrument and propose the nodal state space to represent the robot state in SE(3). Each elementary robot motion can be encoded by regulating the state parameters via a dynamical system. This theoretically ensures that every in-process trajectory is globally feasible and stably reaches an admissible target, and the controller has a closed form that requires no 6-DoF inverse kinematics. Then, to plan the motion steps reliably, we propose an interactive (instant) goal state of the robot that transforms manipulation planning under desired path constraints into a goal-varying manipulation (GVM) problem. We detail how GVM can adaptively and smoothly plan the procedure (proceeding or rewinding the process as needed) based on on-the-fly situations in dynamic or disturbed environments. Finally, we extend the above policy to characterize complete pipelines of various surgical tasks. Simulations show that our framework can smoothly solve twisted maneuvers while avoiding collisions. Physical experiments using the da Vinci Research Kit validate the capability of automating individual tasks including tissue debridement, dissection, and wound suturing. The results confirm good task-level consistency and reliability compared to state-of-the-art automation algorithms.
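For intuition, the dynamical-system regulation of state parameters described in the abstract can be sketched with a hypothetical first-order system. This is a toy in plain Euclidean space, not the paper's nodal state space in SE(3); the gain, step size, and tolerance are illustrative assumptions:

```python
import numpy as np

def regulate(x0, x_goal, gain=2.0, dt=0.01, tol=1e-3, max_steps=10000):
    """Integrate the first-order system x' = -gain * (x - x_goal).

    Every trajectory converges exponentially to x_goal, so any intermediate
    state is itself a valid start: the target is stably reached without an
    explicit path plan. (Toy illustration only; the paper regulates nodal
    state parameters of a surgical instrument, not raw vectors.)
    """
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for _ in range(max_steps):
        if np.linalg.norm(x - x_goal) < tol:
            break
        x = x + dt * (-gain * (x - x_goal))  # explicit Euler step
        trajectory.append(x.copy())
    return np.array(trajectory)

traj = regulate([1.0, -0.5, 0.3], np.zeros(3))
print(np.linalg.norm(traj[-1]))  # just under the 1e-3 tolerance
```

Because the vector field contracts everywhere, interrupting and restarting the motion from any disturbed state still converges, which is the property the abstract credits to its dynamical-system encoding.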
Eiffel Tower: A deep-sea underwater dataset for long-term visual localization
Clémentin Boittiaux, C. Dune, Maxime Ferrera, A. Arnaubec, R. Marxer, M. Matabos, Loïc Van Audenhaege, Vincent Hugel
Pub Date: 2023-05-09 | DOI: 10.1177/02783649231177322
Visual localization plays an important role in the positioning and navigation of robotic systems within previously visited environments. When visits occur over long periods of time, changes in the environment related to seasons or day-night cycles present a major challenge. Underwater, the variability stems from other factors, such as water conditions or the growth of marine organisms. Yet it remains a major and much less studied obstacle, partly due to the lack of data. This paper presents a new deep-sea dataset to benchmark underwater long-term visual localization. The dataset is composed of images from four visits to the same hydrothermal vent edifice over the course of five years. Camera poses and a common geometry of the scene were estimated using navigation data and Structure-from-Motion, and serve as a reference when evaluating visual localization techniques. An analysis of the data provides insights into the major changes observed throughout the years. Furthermore, several well-established visual localization methods are evaluated on the dataset, showing that there is still room for improvement in underwater long-term visual localization. The data is publicly available at seanoe.org/data/00810/92226/.
Autogenerated manipulation primitives
Eric Huang, Xianyi Cheng, Yuemin Mao, Arnav Gupta, M. T. Mason
Pub Date: 2023-05-01 | DOI: 10.1177/02783649231170897
The central theme in robotic manipulation is the robot interacting with the world through physical contact. We tend to describe that physical contact using specific words that capture the nature of the contact and the action, such as grasp, roll, pivot, push, pull, tilt, close, and open. We refer to these situation-specific actions as manipulation primitives. Due to the nonlinear and nonsmooth nature of physical interaction, roboticists have devoted significant effort to studying individual manipulation primitives. However, studying primitives one by one is an inherently limited process, due to engineering costs, overfitting to specific tasks, and lack of robustness to unforeseen variations. These limitations motivate the main contribution of this paper: a complete and general framework to autogenerate manipulation primitives. To do so, we develop the theory and computation of contact modes as a means to classify and enumerate manipulation primitives. The contact modes form a graph, specifically a lattice. Our algorithm to autogenerate manipulation primitives (AMP) performs graph-based optimization on the contact mode lattice and solves a linear program to generate each primitive. We designed several experiments to validate our approach. We benchmarked a wide range of contact scenarios, and our pipeline’s runtime was consistently in the tens of milliseconds. In simulation, we planned manipulation sequences using AMP; in the real world, we showcased the robustness of our approach to real-world modeling errors. We hope that our contributions will lead to more general and robust approaches for robotic manipulation.
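The idea of enumerating contact modes can be illustrated with a brute-force toy. The four mode labels and the unpruned Cartesian-product enumeration below are illustrative assumptions; the paper's algorithm additionally discards geometrically infeasible assignments and organizes the feasible ones into a lattice:

```python
from itertools import product

# Hypothetical mode labels for a planar point contact: separating,
# sticking, or sliding in either tangential direction.
MODES = ("separate", "stick", "slide+", "slide-")

def enumerate_contact_modes(n_contacts):
    """Return every assignment of a mode to each of n_contacts contacts.

    Brute force only: 4**n assignments. A real enumerator (as in the
    paper) prunes assignments inconsistent with the contact geometry.
    """
    return [dict(enumerate(assignment))
            for assignment in product(MODES, repeat=n_contacts)]

modes = enumerate_contact_modes(2)
print(len(modes))  # 16 mode assignments for two contacts
```

Each surviving assignment corresponds to a candidate primitive: fixing the mode linearizes the contact constraints, which is what makes the per-primitive linear program in the abstract possible.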
Selected papers from WAFR2020
Jingjin Yu, Ming C. Lin
Pub Date: 2023-05-01 | DOI: 10.1177/02783649231187014
The blindfolded traveler’s problem: A search framework for motion planning with contact estimates
Brad Saund, Sanjiban Choudhury, S. Srinivasa, D. Berenson
Pub Date: 2023-04-01 | DOI: 10.1177/02783649231170893
We address the problem of robot motion planning under uncertainty where the only observations are through contact with the environment. Such problems are typically solved by planning optimistically, assuming unknown space is free, moving along the planned path, and re-planning if the robot collides. However, this approach can be very inefficient, leading to many unnecessary collisions and unproductive motion. We propose a new formulation, the Blindfolded Traveler’s Problem (BTP), for planning on a graph containing edges of unknown validity, where true validity is observed only through attempted traversal by the robot. The solution to a BTP is a policy indicating the next edge to attempt given previous observations and an initial belief. We prove that BTP is NP-complete and show that exact modeling of the belief is intractable; therefore, we present several approximation-based policies and beliefs. For the policy, we propose graph search with edge weights augmented by the probability of collision. For the belief representation, we propose a weighted Mixture of Experts of Collision Hypothesis Sets and a Manifold Particle Filter. Empirical evaluation in simulation and on a real robot arm shows that our proposed approach vastly outperforms several baselines, as well as a previous approach that does not employ the BTP framework.
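The policy ingredient above (graph search with edge weights augmented by collision probability) can be sketched in a few lines. The particular weighting, cost minus the log of the survival probability, is an illustrative assumption, not the authors' formula, but it shows the trade-off: minimizing the sum balances path length against the product of per-edge success probabilities:

```python
import heapq
import math

def cheapest_path(graph, start, goal):
    """Dijkstra over edges annotated (traversal_cost, p_collision).

    Edge weight = cost - log(1 - p_collision), so a short but risky edge
    can lose to a longer, safer detour. Assumes p_collision < 1.
    """
    frontier = [(0.0, start, [start])]
    settled = {}
    while frontier:
        w, node, path = heapq.heappop(frontier)
        if node == goal:
            return w, path
        if settled.get(node, math.inf) <= w:
            continue
        settled[node] = w
        for nbr, (cost, p_col) in graph.get(node, {}).items():
            heapq.heappush(
                frontier,
                (w + cost - math.log(1.0 - p_col), nbr, path + [nbr]))
    return math.inf, None

# Short risky edge (a->b, 90% collision) vs. longer safe detour (a->c->g).
graph = {
    "a": {"b": (1.0, 0.9), "c": (2.0, 0.0)},
    "b": {"g": (1.0, 0.0)},
    "c": {"g": (2.0, 0.0)},
}
w, path = cheapest_path(graph, "a", "g")
print(path)  # ['a', 'c', 'g']: the risky edge carries -log(0.1) ≈ 2.3 extra
```

In the BTP setting this search would be re-run after each attempted traversal, with the belief updating each edge's collision probability.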
Asymptotically optimal inspection planning via efficient near-optimal search on sampled roadmaps
Mengyu Fu, A. Kuntz, Oren Salzman, R. Alterovitz
Pub Date: 2023-04-01 | DOI: 10.1177/02783649231171646
Inspection planning, the task of planning motions for a robot that enable it to inspect a set of points of interest, has applications in domains such as industrial, field, and medical robotics. Inspection planning can be computationally challenging, as the search space over motion plans grows exponentially with the number of points of interest to inspect. We propose a novel method, Incremental Random Inspection-roadmap Search (IRIS), that computes inspection plans whose length and set of successfully inspected points asymptotically converge to those of an optimal inspection plan. IRIS incrementally densifies a motion-planning roadmap using a sampling-based algorithm and performs efficient near-optimal graph search over the resulting roadmap as it is generated. We prove that the resulting algorithm is asymptotically optimal under very general assumptions about the robot and the environment. We demonstrate IRIS’s efficacy on a simulated inspection task with a planar five-DOF manipulator, on a simulated bridge inspection task with an unmanned aerial vehicle (UAV), and on a medical endoscopic inspection task for a continuum parallel surgical robot in cluttered human anatomy. In all these systems, IRIS computes higher-quality inspection plans orders of magnitude faster than a prior state-of-the-art method.
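The densify-then-search loop behind such methods can be caricatured with a toy coverage planner. The random viewpoint sampler and the greedy coverage search below are illustrative stand-ins (IRIS performs near-optimal graph search with optimality guarantees, not greedy set cover), but they show why inspection quality can only improve as the roadmap densifies:

```python
import random

random.seed(0)
POIS = set(range(10))  # hypothetical points of interest to inspect

def sample_viewpoint():
    """Made-up sampler: a random viewpoint sees a random subset of POIs."""
    covers = {p for p in POIS if random.random() < 0.4}
    return {"covers": covers, "cost": 1.0 + random.random()}

def greedy_plan(viewpoints):
    """Repeatedly pick the viewpoint with the best new-coverage-per-cost."""
    uncovered, cost = set(POIS), 0.0
    while uncovered:
        best = max(viewpoints,
                   key=lambda v: len(v["covers"] & uncovered) / v["cost"])
        if not best["covers"] & uncovered:
            break  # roadmap still too sparse to cover the rest
        uncovered -= best["covers"]
        cost += best["cost"]
    return len(POIS) - len(uncovered), cost

# Densify incrementally: coverage is monotone in the viewpoint set.
views = []
for batch in range(3):
    views += [sample_viewpoint() for _ in range(5)]
    covered, cost = greedy_plan(views)
    print(f"{len(views):2d} viewpoints -> {covered}/10 covered, cost {cost:.2f}")
```

The convergence claim in the abstract is the rigorous version of this picture: as sampling densifies the roadmap, the best plan on the roadmap approaches the optimal inspection plan.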
Selected papers from ISRR'2019
T. Asfour, Jaeheung Park, E. Yoshida
Pub Date: 2023-04-01 | DOI: 10.1177/02783649231180249
This special issue gathers selected papers presented at the International Symposium on Robotics Research (ISRR 2019), held on October 6–10, 2019, in Hanoi, Vietnam. The conference enjoyed distinguished talks by renowned invited speakers as well as active discussions at poster presentations of contributed papers on recent research. Following the strong relationship of the ISRR conferences with IJRR since its early years, the Program Chair and Co-Chairs proposed this special issue to the editorial board as guest editors, and invited the authors of contributed papers that received high review scores among many high-quality papers to submit extended versions of their work. The guest editors would like to express their sincere gratitude to the authors for their patience with a review process slowed down by the difficulties of the worldwide pandemic, and are delighted that this special issue is finally complete. The special issue reflects recent advances in artificial intelligence and autonomy in different areas of robotics, but also includes the fundamental contributions that have long represented the tradition of ISRR, forming a good mixture covering broad areas of robotics research despite the small number of papers. While the conference was held three years ago, we believe the special issue offers an excellent collection, including updates with recent results.
Two papers benefit from advanced research in artificial intelligence and autonomous agents, especially useful for high-level planning. The first paper, “Automatic Encoding and Repair of Reactive High-Level Tasks with Learned Abstract Representations” by Adam Pacheck, Steven James, George Konidaris, and Hadas Kress-Gazit, presents a skill-based framework enabling the execution of reactive high-level tasks by encoding robot skills with learned abstract sensor data to find a feasible plan for the task to be achieved. The second paper, “Multilevel Monte Carlo for Solving POMDPs Online” by Marcus Hoerger, Hanna Kurniawati, and Alberto Elfes, addresses the challenging planning problem of Partially Observable Markov Decision Processes (POMDPs) with complex nonlinear dynamics through an original multi-level POMDP planner that obtains near-optimal solutions efficiently.
The fundamental robotics contributions concern algorithmic foundations for manipulation and innovative robot design, with two papers included for each topic. “The Blindfolded Traveler’s Problem: A Search Framework for Motion Planning with Contact Estimates” by Bradley Saund, Sanjiban Choudhury, Siddhartha Srinivasa, and Dmitry Berenson tackles planning on a graph of unknown edge validity via a novel policy-belief combination that simultaneously expresses the collision probability and expert hypotheses about the collision. The second manipulation study is “The Certified Grasping” by Bernardo Aceituno-Cabezas, Jose Ballester, and Alberto Rodriguez. This paper proposes the idea of certificates
Pub Date : 2023-03-01DOI: 10.1177/02783649231166977
Fadri Furrer, Tonci Novkovic, M. Fehr, Margarita Grinvald, César Cadena, Juan I. Nieto, R. Siegwart
The capabilities of discovering new knowledge and updating the previously acquired one are crucial for deploying autonomous robots in unknown and changing environments. Spatial and objectness concepts are at the basis of several robotic functionalities and are part of the intuitive understanding of the physical world for us humans. In this paper, we propose a method, which we call Modelify, to incrementally map the environment at the level of objects in a consistent manner. We follow an approach where no prior knowledge of the environment is required. The only assumption we make is that objects in the environment are separated by concave boundaries. The approach works on an RGB-D camera stream, where object-like segments are extracted and stored in an incremental database. Segment description and matching are performed by exploiting 2D and 3D information, allowing to build a graph of all segments. Finally, a matching score guides a Markov clustering algorithm to merge segments, thus completing object representations. Our approach allows creating single (merged) instances of repeating objects, objects that were observed from different viewpoints, and objects that were observed in previous mapping sessions. Thanks to our matching and merging strategies this also works with only partially overlapping segments. We perform evaluations on indoor and outdoor datasets recorded with different RGB-D sensors and show the benefit of using a clustering method to form merge candidates and keypoints detected in both 2D and 3D. Our new method shows better results than previous approaches while being significantly faster. A newly recorded dataset and the source code are released with this publication.
{"title":"Modelify: An approach to incrementally build 3D object models for map completion","authors":"Fadri Furrer, Tonci Novkovic, M. Fehr, Margarita Grinvald, César Cadena, Juan I. Nieto, R. Siegwart","doi":"10.1177/02783649231166977","journal":"International Journal of Robotics Research","publicationDate":"2023-03-01"}
Pub Date: 2023-02-03 DOI: 10.1177/02783649231182453
Lasse Peters, Vicenç Rubies-Royo, C. Tomlin, L. Ferranti, Javier Alonso-Mora, C. Stachniss, David Fridovich-Keil
Robots deployed in the real world must be able to interact with other agents in their environment. Dynamic game theory provides a powerful mathematical framework for modeling scenarios in which agents have individual objectives and interactions evolve over time. A key limitation of such techniques, however, is that they require a priori knowledge of all players' objectives. In this work, we address this issue by proposing a novel method for learning players' objectives in continuous dynamic games from noise-corrupted, partial state observations. Our approach learns objectives by coupling the estimation of each player's unknown cost parameters with the inference of unobserved states and inputs through Nash equilibrium constraints. By coupling past state estimates with future state predictions, our approach is amenable to simultaneous online learning and prediction in a receding-horizon fashion. We demonstrate our method in several simulated traffic scenarios, in which we recover players' preferences, e.g., their desired travel speed and collision-avoidance behavior.
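The core idea of recovering cost parameters from equilibrium (stationarity) conditions can be sketched in the single-player special case, where Nash equilibrium constraints reduce to KKT optimality conditions. The scalar system, weights, and horizon below are a toy assumption, not the paper's multi-player formulation: given an optimal demonstration for dynamics x_{t+1} = a x_t + b u_t under cost Σ (w x_t² + u_t²), the costates λ follow from the input stationarity condition 2u_t + b λ_{t+1} = 0, and the state stationarity condition 2w x_t + a λ_{t+1} − λ_t = 0 is linear in the unknown w, so least squares recovers it.

```python
import numpy as np

a, b, w_true, T = 0.9, 0.5, 3.0, 50  # toy dynamics, true state weight, horizon

# Generate the "expert" demonstration: finite-horizon LQR via backward Riccati
# recursion with stage cost w x^2 + u^2 and terminal cost w x^2.
P = w_true
K = np.zeros(T)
for t in reversed(range(T)):
    K[t] = (b * P * a) / (1.0 + b * P * b)
    P = w_true + a * P * (a - b * K[t])

x = np.zeros(T + 1)
u = np.zeros(T)
x[0] = 1.0
for t in range(T):
    u[t] = -K[t] * x[t]
    x[t + 1] = a * x[t] + b * u[t]

# Costates from input stationarity: 2 u_t + b lam_{t+1} = 0.
lam = np.zeros(T + 1)
lam[1:] = -2.0 * u / b

# State stationarity for t = 1..T-1: 2 w x_t + a lam_{t+1} - lam_t = 0.
# Linear in w, so the least-squares estimate has a closed form.
xs = x[1:T]
rhs = lam[1:T] - a * lam[2:T + 1]
w_est = (xs @ rhs) / (2.0 * (xs @ xs))
print(round(w_est, 3))  # recovers w_true = 3.0 from the demonstration alone
```

Because the demonstration is exactly optimal, the recovery is exact up to floating point; with noisy, partial observations (the paper's setting), the states and inputs themselves become decision variables estimated jointly with the cost parameters.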
{"title":"Online and offline learning of player objectives from partial observations in dynamic games","authors":"Lasse Peters, Vicenç Rubies-Royo, C. Tomlin, L. Ferranti, Javier Alonso-Mora, C. Stachniss, David Fridovich-Keil","doi":"10.1177/02783649231182453","journal":"International Journal of Robotics Research","publicationDate":"2023-02-03"}