Pub Date: 2024-09-01 | Epub Date: 2024-04-03 | DOI: 10.1177/02783649241230993
Charles Champagne Cossette, Mohammed Ayman Shalaby, David Saussié, James Richard Forbes
This paper addresses the problem of decentralized, collaborative state estimation in robotic teams. In particular, this paper considers problems where individual robots estimate similar physical quantities, such as each other's position relative to themselves. The use of pseudomeasurements is introduced as a means of modeling such relationships between robots' state estimates and is shown to be a tractable way to approach the decentralized state estimation problem. Moreover, this formulation easily leads to a general-purpose observability test that simultaneously accounts for measurements that robots collect from their own sensors, as well as the communication structure within the team. Finally, input preintegration is proposed as a communication-efficient way of sharing odometry information between robots, and the entire theory is appropriate for both vector-space and Lie-group state definitions. To overcome the need for communicating preintegrated covariance information, a deep autoencoder is proposed that reconstructs the covariance information from the inputs, hence further reducing the communication requirements. The proposed framework is evaluated on three different simulated problems, and one experiment involving three quadcopters.
Title: Decentralized state estimation: An approach using pseudomeasurements and preintegration (International Journal of Robotics Research). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11455620/pdf/
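The pseudomeasurement idea can be illustrated with a toy example: treat the constraint that two robots' estimates of a shared quantity should agree as a zero-valued measurement, and fuse it with a standard Kalman update. This is only a minimal vector-space sketch of the general idea, not the paper's Lie-group formulation; all names and numbers are illustrative.

```python
import numpy as np

def pseudomeasurement_update(x, P, H, R):
    """Kalman update treating the constraint H @ x ~ 0 as a pseudomeasurement
    with value zero and covariance R."""
    z = np.zeros(H.shape[0])           # pseudomeasurement: the constraint holds
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Stacked state [p1, p2]: robot 1's and robot 2's estimates of the same
# scalar quantity; the pseudomeasurement enforces p1 - p2 ~ 0.
x = np.array([1.0, 1.4])
P = np.diag([0.5, 0.5])
H = np.array([[1.0, -1.0]])            # constraint Jacobian
R = np.array([[1e-3]])                 # how strictly the constraint is enforced
x_new, P_new = pseudomeasurement_update(x, P, H, R)
```

After the update the two estimates are pulled together (both near 1.2), and the covariance shrinks, reflecting the information carried by the consistency constraint.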
Pub Date: 2023-11-14 | DOI: 10.1177/02783649231210593
Inrak Choi, Sohee John Yoon, Yong-Lae Park
Muscles in animals, like actuation systems in advanced robots, do not consist of an actuation component alone: motive, dissipative, and proprioceptive components exist as a complete set to achieve versatile and precise manipulation tasks. We present such a system as a linear electrostatic actuator package that incorporates sensing and braking components. Our modular actuator design combines actuator films with a dielectric fluid, and we examine the performance of the proposed system both theoretically and experimentally. In addition, we introduce an optical proprioceptive sensing mechanism that exploits the Moiré pattern innately generated on the actuator surface, which allows high-resolution, noise-free reading of the actuator position. The optical sensor can also measure the force exerted by the actuator. Lastly, we add an electroadhesive brake to the package in parallel with the actuator, introducing a mode-switching method that utilizes all three components, and present control demonstrations with a robot arm. Our actuation system is compact and flexible and can be easily integrated into various robotic applications.
Title: Linear electrostatic actuators with Moiré-effect optical proprioceptive sensing and electroadhesive braking
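For intuition on the Moiré-based sensing described above, here is a back-of-the-envelope sketch of the geometric amplification two overlaid gratings provide; the pitch values are hypothetical and not taken from the paper.

```python
# Moiré amplification between two overlaid gratings of pitches p1 and p2:
# the beat (fringe) period is p1*p2/|p1 - p2|, and a lateral shift of one
# grating moves the fringe by roughly p2/|p1 - p2| times that shift -- an
# optical "gear ratio" that makes tiny displacements readable at coarse
# optical resolution.

def moire_fringe_period(p1: float, p2: float) -> float:
    return p1 * p2 / abs(p1 - p2)

def moire_magnification(p1: float, p2: float) -> float:
    return p2 / abs(p1 - p2)

# Illustrative (hypothetical) pitches: 1.00 mm and 1.05 mm gratings.
p1, p2 = 1.00, 1.05
period = moire_fringe_period(p1, p2)   # ~21 mm fringe period
gain = moire_magnification(p1, p2)     # ~21x displacement amplification
```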
Pub Date: 2023-11-10 | DOI: 10.1177/02783649231215372
Jose Cuaran, Andres Eduardo Baquero Velasquez, Mateus Valverde Gasparino, Naveen Kumar Uppalapati, Arun Narenthiran Sivakumar, Justin Wasserman, Muhammad Huzaifa, Sarita Adve, Girish Chowdhary
Simultaneous localization and mapping (SLAM) has been an active research problem over recent decades. Many leading solutions are available that achieve remarkable performance in environments with familiar structure, such as indoors and cities. However, our work shows that these leading systems fail in agricultural settings, particularly in under-canopy navigation in the world's largest-acreage crops: corn (Zea mays) and soybean (Glycine max). Abundant visual clutter from leaves, varying illumination, and stark visual similarity cause these environments to lose the familiar structure on which SLAM algorithms rely. To advance SLAM in such unstructured agricultural environments, we present a comprehensive agricultural dataset. Our open dataset consists of stereo images, IMU readings, wheel encoder data, and GPS measurements continuously recorded from a mobile robot in corn and soybean fields across different growth stages. In addition, we present best-case benchmark results for several leading visual-inertial odometry and SLAM systems. Our data and benchmark clearly show significant research promise in SLAM for agricultural settings. The dataset is available online at https://github.com/jrcuaranv/terrasentia-dataset.
Title: Under-canopy dataset for advancing simultaneous localization and mapping in agricultural robotics
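As context for the benchmark results mentioned above, a common headline metric for evaluating visual-inertial odometry and SLAM systems is the absolute trajectory error (ATE). A minimal sketch, with the usual trajectory-alignment step deliberately omitted:

```python
import numpy as np

def ate_rmse(est: np.ndarray, gt: np.ndarray) -> float:
    """Root-mean-square absolute trajectory error between time-aligned
    estimated and ground-truth positions (N x 3 arrays). A full evaluation
    would first align the two trajectories (e.g. a Umeyama fit); that step
    is omitted here for brevity."""
    err = np.linalg.norm(est - gt, axis=1)   # per-timestep position error
    return float(np.sqrt(np.mean(err ** 2)))

# Toy example: an estimate with a constant 10 cm lateral offset.
gt = np.zeros((100, 3))
est = np.zeros((100, 3))
est[:, 0] = 0.1
rmse = ate_rmse(est, gt)   # 0.1 m
```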
Pub Date: 2023-11-09 | DOI: 10.1177/02783649231209337
Andreas Orthey, Sohaib Akbar, Marc Toussaint
High-dimensional motion planning problems can often be solved significantly faster by using multilevel abstractions. While there are various ways to formally capture multilevel abstractions, we formulate them in terms of fiber bundles. Fiber bundles describe lower-dimensional projections of the state space using local product spaces, which allows us to concisely describe and derive novel algorithms in terms of bundle restrictions and bundle sections. Given such a structure and a corresponding admissible constraint function, we develop highly efficient and asymptotically optimal sampling-based motion planning methods for high-dimensional state spaces. These methods exploit the structure of fiber bundles through bundle primitives, which we use to create novel bundle planners: the rapidly-exploring quotient-space trees (QRRT*) and the quotient-space roadmap planner (QMP*). Both planners are shown to be probabilistically complete and almost-surely asymptotically optimal. To evaluate our bundle planners, we compare them against classical sampling-based planners on benchmarks of four low-dimensional and eight high-dimensional scenarios, ranging from 21 to 100 degrees of freedom and including multiple robots and nonholonomic constraints. Our findings show improvements of two to six orders of magnitude and underline the efficiency of multilevel motion planners and the benefit of exploiting multilevel abstractions using the terminology of fiber bundles.
Title: Multilevel motion planning: A fiber bundle formulation
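The projection/section machinery can be shown schematically: project the total space onto a lower-dimensional base space, plan there, and lift samples back through the fiber. This toy sketch only mirrors that general idea; the dimensions and names are invented for illustration and are not the paper's API.

```python
import random

# Schematic bundle structure for sampling-based planning: the total space
# (e.g. a mobile manipulator's base pose plus arm joints) projects onto a
# base space (the base pose alone); samples found in the low-dimensional
# base are lifted back by drawing fiber (arm) coordinates.

BASE_DIM, FIBER_DIM = 2, 7   # hypothetical: planar base x 7-DOF arm

def project(q):
    """Bundle projection: keep only the base-space coordinates."""
    return q[:BASE_DIM]

def lift(q_base):
    """Bundle section (one choice of): extend a base sample with random
    fiber coordinates, producing a point in the total space."""
    return list(q_base) + [random.uniform(-3.14, 3.14) for _ in range(FIBER_DIM)]

q = [1.0, 2.0] + [0.0] * FIBER_DIM
q_base = project(q)          # [1.0, 2.0]
q_lifted = lift(q_base)      # 9-dimensional total-space sample
```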
Pub Date: 2023-11-09 | DOI: 10.1177/02783649231213117
Jeongyun Kim, Myung-Hwan Jeon, Sangwoo Jung, Wooseong Yang, Minwoo Jung, Jaeho Shin, Ayoung Kim
Transparent objects are encountered frequently in our daily lives, yet recognizing them poses challenges for conventional vision sensors because their unique material properties are poorly perceived by RGB or depth cameras. Overcoming this limitation, thermal infrared cameras have emerged as a solution, offering improved visibility and shape information for transparent objects. In this paper, we present TRansPose, the first large-scale multispectral dataset that combines stereo RGB-D, thermal infrared (TIR) images, and object poses to promote transparent object research. The dataset includes 99 transparent objects, encompassing 43 household items, 27 recyclable trash items, 29 pieces of chemical laboratory equipment, and 12 non-transparent objects. It comprises a vast collection of 333,819 images and 4,000,056 annotations, providing instance-level segmentation masks, ground-truth poses, and completed depth information. The data was acquired using an FLIR A65 thermal infrared camera, two Intel RealSense L515 RGB-D cameras, and a Franka Emika Panda robot manipulator. Spanning 87 sequences, TRansPose covers various challenging real-life scenarios, including objects filled with water, diverse lighting conditions, heavy clutter, non-transparent or translucent containers, objects in plastic bags, and multi-stacked objects. Supplementary material can be accessed from the following link: https://sites.google.com/view/transpose-dataset.
Title: TRansPose: Large-scale multispectral dataset for transparent object
Pub Date: 2023-11-07 | DOI: 10.1177/02783649231207655
Guozheng Lu, Yixi Cai, Nan Chen, Fanze Kong, Yunfan Ren, Fu Zhang
We address the theoretical and practical problems related to trajectory generation and tracking control for tail-sitter UAVs. Theoretically, we focus on the differential flatness property with full exploitation of actual UAV aerodynamic models, which lays a foundation for generating dynamically feasible trajectories and achieving high-performance tracking control. We have found that a tail-sitter is differentially flat with accurate (not simplified) aerodynamic models within the entire flight envelope, by specifying the coordinated-flight condition and choosing the vehicle position as the flat output. This fundamental property allows us to fully exploit high-fidelity aerodynamic models in trajectory planning and tracking control to achieve accurate tail-sitter flights. In particular, an optimization-based trajectory planner for tail-sitters is proposed to design high-quality, smooth trajectories subject to kinodynamic constraints, singularity-free constraints, and actuator saturation. The planned trajectory of the flat output is transformed into a state trajectory in real time, with optional consideration of environmental wind. To track the state trajectory, a global, singularity-free, and minimally parameterized on-manifold MPC is developed, which fully leverages the accurate aerodynamic model to achieve high-accuracy trajectory tracking within the whole flight envelope. The proposed algorithms are implemented on our quadrotor tail-sitter prototype, "Hong Hu," and their effectiveness is demonstrated through extensive real-world experiments in both indoor and outdoor field tests, including agile SE(3) flight through consecutive narrow windows requiring specific attitudes at speeds up to 10 m/s, typical tail-sitter maneuvers (transition, level flight, and loiter) at speeds up to 20 m/s, and extremely aggressive aerobatic maneuvers (Wingover, Loop, Vertical Eight, and Cuban Eight) with accelerations up to 2.5 g.
The video demonstration is available at https://youtu.be/2x_bLbVuyrk.
Title: Trajectory generation and tracking control for aggressive tail-sitter flights
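For readers unfamiliar with differential flatness, the drag-free point-mass special case shows the flavor of mapping a flat-output (position) trajectory back to states and inputs. The paper's contribution is performing this map with the full tail-sitter aerodynamic model; this sketch deliberately does not include aerodynamics, and the mass value is illustrative.

```python
import numpy as np

# Differential-flatness sketch for a drag-free point-mass vehicle: from the
# second derivative of the flat output (position), recover the required
# thrust magnitude and the desired body z-axis. This is the textbook
# special case, not the paper's aerodynamics-aware formulation.

G = np.array([0.0, 0.0, -9.81])   # gravity, m/s^2
MASS = 1.0                        # kg (illustrative)

def flat_to_thrust(accel: np.ndarray):
    """Map desired acceleration (2nd derivative of the flat output) to
    thrust magnitude and desired body z-axis."""
    f_vec = MASS * (accel - G)            # required force vector
    thrust = float(np.linalg.norm(f_vec))
    body_z = f_vec / thrust               # unit vector the thrust must point along
    return thrust, body_z

thrust, body_z = flat_to_thrust(np.zeros(3))   # hover: thrust = m*g, z-axis up
```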
Pub Date: 2023-11-07 | DOI: 10.1177/02783649231210012
Pengda Mao, Rao Fu, Quan Quan
This paper presents a novel method for efficiently solving trajectory planning problems for swarm robotics in cluttered environments. Recent research has demonstrated high success rates in real-time local trajectory planning for swarms in cluttered environments, but optimizing trajectories for each robot remains computationally expensive, with a computational complexity ranging from [Formula: see text] to [Formula: see text], where [Formula: see text] is the number of parameters in the parameterized trajectory, [Formula: see text] is the precision, and [Formula: see text] is the number of iterations with respect to [Formula: see text] and [Formula: see text]. Furthermore, it is difficult to move the swarm as a group. To address these issues, we define and construct the optimal virtual tube, which contains infinitely many optimal trajectories. Under certain conditions, any optimal trajectory in the optimal virtual tube can be expressed as a convex combination of a finite number of optimal trajectories, with a computational complexity of [Formula: see text]. We then propose a hierarchical approach comprising an energy-minimizing planning method for the optimal virtual tube and distributed model predictive control. In simulations and experiments, the proposed approach is validated and its effectiveness over other methods is demonstrated through comparison.
Title: Optimal virtual tube planning and control for swarm robotics
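The convex-combination property is what makes the virtual tube cheap to use at run time: a new trajectory is a weighted sum of a few stored ones rather than a fresh optimization. A toy sketch with two straight-line "wall" trajectories (illustrative only, not the paper's construction):

```python
import numpy as np

def convex_combination(trajs: np.ndarray, w: np.ndarray) -> np.ndarray:
    """trajs: (K, T, D) array of K sampled trajectories, each T waypoints in
    D dimensions; w: (K,) nonnegative weights summing to 1. Returns the
    weighted trajectory -- O(K*T*D) arithmetic, no optimization."""
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9
    return np.einsum("k,ktd->td", w, trajs)

# Two straight-line trajectories forming the tube "walls"; an equal
# weighting yields the tube's center line.
t = np.linspace(0.0, 1.0, 11)
lower = np.stack([t, np.zeros_like(t)], axis=1)   # y = 0 wall
upper = np.stack([t, np.ones_like(t)], axis=1)    # y = 1 wall
mid = convex_combination(np.stack([lower, upper]), np.array([0.5, 0.5]))
```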
Pub Date: 2023-11-07 | DOI: 10.1177/02783649231202548
Patrick L Anderson, Richard J Hendrick, Margaret F Rox, Robert J Webster
Understanding elastic instability has been a recent focus of concentric tube robot research. Modeling advances have enabled prediction of when instabilities will occur and produced metrics for the stability of the robot during use. In this paper, we show how these metrics can be used to resolve redundancy to avoid elastic instability, opening the door for the practical use of higher curvature designs than have previously been possible. We demonstrate the effectiveness of the approach using a three-tube robot that is stabilized by redundancy resolution when following trajectories that would otherwise result in elastic instabilities. We also show that it is stabilized when teleoperated in ways that otherwise produce elastic instabilities. Lastly, we show that the redundancy resolution framework presented here can be applied to other control objectives useful for surgical robots, such as maximizing or minimizing compliance in desired directions.
Title: Exceeding traditional curvature limits of concentric tube robots through redundancy resolution
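The redundancy-resolution mechanism the abstract refers to can be sketched with the classic pseudoinverse-plus-nullspace-projection scheme: track the task exactly while pushing a secondary objective through the Jacobian nullspace. The secondary gradient here is a stand-in; the paper's actual elastic-stability metric for concentric tube robots is not reproduced.

```python
import numpy as np

def redundancy_resolution(J, xdot, grad_S, k=1.0):
    """Joint velocity that tracks the task-space velocity xdot while ascending
    a secondary objective S(q) (gradient grad_S) in the Jacobian nullspace,
    so the secondary motion never disturbs the task."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J    # nullspace projector
    return J_pinv @ xdot + k * (N @ grad_S)

# Illustrative 2D task with 3 joints (one degree of redundancy).
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
qdot = redundancy_resolution(J,
                             np.array([0.1, 0.0]),    # desired task velocity
                             np.array([0.0, 0.0, 1.0]))  # stand-in gradient
# The task velocity is met exactly; the secondary motion lands in joint 3,
# which this J maps to zero task-space motion.
```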
Pub Date: 2023-11-03 | DOI: 10.1177/02783649231201437
Hongkai Dai, Alexandre Amice, Peter Werner, Annan Zhang, Russ Tedrake
Understanding the geometry of collision-free configuration space (C-free) in the presence of Cartesian-space obstacles is an essential ingredient for collision-free motion planning. While it is possible to check for collisions at a point using standard algorithms, to date no practical method exists for computing C-free regions with rigorous certificates, due to the complexity of mapping Cartesian-space obstacles through the kinematics. In this work, we present what is, to our knowledge, the first rigorous method for approximately decomposing a rational parametrization of C-free into certified polyhedral regions. Our method, called C-Iris (C-space Iterative Regional Inflation by Semidefinite programming), generates large convex polytopes in a rational parameterization of the configuration space that are rigorously certified to be collision-free. Such regions have been shown to be useful for both optimization-based and randomized motion planning. Based on convex optimization, our method works in arbitrary dimensions, assumes only that the obstacles are convex in 3D Cartesian space, and is fast enough to scale to realistic problems in manipulation. We demonstrate our algorithm's ability to fill a non-trivial amount of collision-free C-space in several 2-DOF examples where the C-space can be visualized, as well as its scalability on a 7-DOF KUKA iiwa, a 6-DOF UR3e, and 12-DOF bimanual manipulators. An implementation of our algorithm is open-sourced in Drake. We furthermore provide examples of our algorithm in interactive Python notebooks.
{"title":"Certified polyhedral decompositions of collision-free configuration space","authors":"Hongkai Dai, Alexandre Amice, Peter Werner, Annan Zhang, Russ Tedrake","doi":"10.1177/02783649231201437","DOIUrl":"https://doi.org/10.1177/02783649231201437","url":null,"abstract":"Understanding the geometry of collision-free configuration space (C-free) in the presence of Cartesian-space obstacles is an essential ingredient for collision-free motion planning. While it is possible to check for collisions at a point using standard algorithms, to date no practical method exists for computing C-free regions with rigorous certificates, due to the complexity of mapping Cartesian-space obstacles through the kinematics. In this work, we present the first, to our knowledge, rigorous method for approximately decomposing a rational parametrization of C-free into certified polyhedral regions. Our method, called C-Iris (C-space Iterative Regional Inflation by Semidefinite programming), generates large, convex polytopes in a rational parametrization of the configuration space that are rigorously certified to be collision-free. Such regions have been shown to be useful for both optimization-based and randomized motion planning. Based on convex optimization, our method works in arbitrary dimensions, makes assumptions only about the convexity of the obstacles in 3D Cartesian space, and is fast enough to scale to realistic problems in manipulation. We demonstrate our algorithm’s ability to fill a non-trivial amount of collision-free C-space in several 2-DOF examples where the C-space can be visualized, as well as its scalability on a 7-DOF KUKA iiwa, a 6-DOF UR3e, and 12-DOF bimanual manipulators. An implementation of our algorithm is open-sourced in Drake. We furthermore provide examples of our algorithm in interactive Python notebooks.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135873889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-02 | DOI: 10.1177/02783649231207974
Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim
Social robots facilitate improved human–robot interactions through nonverbal behaviors such as handshakes or hugs. However, traditional methods, which rely on precoded motions, are predictable and can detract from the perception of robots as interactive agents. To address this issue, we have introduced a Seq2Seq-based neural network model that learns social behaviors from human–human interactions in an end-to-end manner. To mitigate the risk of invalid pose sequences during long-term behavior generation, we incorporated a generative adversarial network (GAN). The proposed method was tested using the humanoid robot Pepper in a simulated environment. Given the challenges in assessing the success of social behavior generation, we devised novel metrics to quantify the discrepancy between the generated and ground-truth behaviors. Our analysis reveals the impact of different networks on behavior generation performance and compares the efficacy of learning multiple behaviors versus a single behavior. We anticipate that our method will find application in various sectors, including home service, guide, delivery, educational, and virtual robots, thereby enhancing user interaction and enjoyment.
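The abstract mentions "novel metrics to quantify the discrepancy between the generated and ground-truth behaviors" without stating them; a common baseline metric of this kind is the mean per-joint position error over a pose sequence. The sketch below uses made-up sequence dimensions and is only an illustration of the general idea, not the paper's metric:

```python
import numpy as np

def mean_joint_error(generated, ground_truth):
    """Mean Euclidean distance between corresponding joints, averaged
    over time. Both arrays have shape (T, num_joints, 3)."""
    assert generated.shape == ground_truth.shape
    per_joint = np.linalg.norm(generated - ground_truth, axis=-1)
    return per_joint.mean()

rng = np.random.default_rng(0)
gt = rng.normal(size=(50, 15, 3))             # 50 frames, 15 joints (illustrative)
gen = gt + 0.01 * rng.normal(size=gt.shape)   # stand-in for a generated sequence
print(mean_joint_error(gt, gt))               # prints 0.0
print(mean_joint_error(gen, gt) > 0.0)        # prints True
```

A metric like this scores frame-aligned pose accuracy; distribution-level metrics (comparing statistics of generated versus real motion) are the other common family for evaluating generative behavior models.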
{"title":"Nonverbal social behavior generation for social robots using end-to-end learning","authors":"Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim","doi":"10.1177/02783649231207974","DOIUrl":"https://doi.org/10.1177/02783649231207974","url":null,"abstract":"Social robots facilitate improved human–robot interactions through nonverbal behaviors such as handshakes or hugs. However, traditional methods, which rely on precoded motions, are predictable and can detract from the perception of robots as interactive agents. To address this issue, we have introduced a Seq2Seq-based neural network model that learns social behaviors from human–human interactions in an end-to-end manner. To mitigate the risk of invalid pose sequences during long-term behavior generation, we incorporated a generative adversarial network (GAN). The proposed method was tested using the humanoid robot Pepper in a simulated environment. Given the challenges in assessing the success of social behavior generation, we devised novel metrics to quantify the discrepancy between the generated and ground-truth behaviors. Our analysis reveals the impact of different networks on behavior generation performance and compares the efficacy of learning multiple behaviors versus a single behavior. We anticipate that our method will find application in various sectors, including home service, guide, delivery, educational, and virtual robots, thereby enhancing user interaction and enjoyment.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135875571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}