Pub Date: 2024-09-01 (Epub: 2024-04-03) | DOI: 10.1177/02783649241230993
Decentralized state estimation: An approach using pseudomeasurements and preintegration
Charles Champagne Cossette, Mohammed Ayman Shalaby, David Saussié, James Richard Forbes
International Journal of Robotics Research, 43(10): 1573–1593. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11455620/pdf/

This paper addresses the problem of decentralized, collaborative state estimation in robotic teams. In particular, it considers problems where individual robots estimate similar physical quantities, such as each other's position relative to themselves. The use of pseudomeasurements is introduced as a means of modeling such relationships between robots' state estimates and is shown to be a tractable way to approach the decentralized state estimation problem. Moreover, this formulation leads naturally to a general-purpose observability test that simultaneously accounts for the measurements that robots collect from their own sensors and for the communication structure within the team. Finally, input preintegration is proposed as a communication-efficient way of sharing odometry information between robots, and the entire theory applies to both vector-space and Lie-group state definitions. To avoid communicating preintegrated covariance information, a deep autoencoder is proposed that reconstructs the covariance information from the inputs, further reducing the communication requirements. The proposed framework is evaluated on three different simulated problems and one experiment involving three quadcopters.
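The pseudomeasurement idea can be illustrated with a minimal sketch (not the paper's actual implementation): the requirement that two robots' overlapping state estimates agree is treated as a zero-valued measurement and fused with a standard Kalman update. All names, dimensions, and numbers below are hypothetical.

```python
import numpy as np

def pseudomeasurement_update(x, P, H, R):
    """Kalman update with a zero-valued pseudomeasurement z = 0 = H x + noise.

    Treats consistency between two robots' overlapping state estimates as a
    measurement that their difference is zero (up to noise with covariance R).
    """
    z = np.zeros(H.shape[0])          # pseudomeasurement: the constraint holds
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Stacked state: robot A's estimate of a relative position (first 2 entries)
# and robot B's estimate of the same quantity (last 2 entries).
x = np.array([1.0, 2.0, 1.4, 1.6])
P = np.eye(4)
H = np.hstack([np.eye(2), -np.eye(2)])   # encodes x_A - x_B ≈ 0
R = 1e-3 * np.eye(2)
x_new, P_new = pseudomeasurement_update(x, P, H, R)
```

After the update, the two robots' estimates of the shared quantity are pulled toward agreement, and the joint covariance shrinks along the constrained directions.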
Pub Date: 2023-11-14 | DOI: 10.1177/02783649231210593
Linear electrostatic actuators with Moiré-effect optical proprioceptive sensing and electroadhesive braking
Inrak Choi, Sohee John Yoon, Yong-Lae Park

Muscles in animals and actuation systems in advanced robots do not consist of the actuation component alone; motive, dissipative, and proprioceptive components exist together as a complete set to achieve versatile and precise manipulation tasks. We present such a system as a linear electrostatic actuator package that incorporates sensing and braking components. Our modular actuator design is composed of actuator films and a dielectric fluid, and we examine the performance of the proposed system both theoretically and experimentally. In addition, we introduce a mechanism of optical proprioceptive sensing that utilizes the Moiré pattern innately generated on the actuator surface, which allows high-resolution, noise-free reading of the actuator's position. The optical sensor is also capable of measuring the force exerted by the actuator. Lastly, we add an electroadhesive brake to the package in parallel with the actuator, introducing a mode-switching method that utilizes all three components, and present control demonstrations with a robot arm. Our actuation system is compact and flexible and can be easily integrated with various robotic applications.
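Why a Moiré pattern enables high-resolution position reading can be sketched with the classic moiré-magnification relation for two overlaid line gratings; this illustrates the general optical effect, not the paper's specific sensor geometry, and the pitch values are hypothetical.

```python
def moire_period(p1, p2):
    """Beat (moiré) period of two overlaid line gratings with pitches p1, p2."""
    return p1 * p2 / abs(p1 - p2)

def fringe_amplification(p1, p2):
    """Factor by which a displacement of the p1 grating is magnified in the
    moiré fringe position: p1 / |p2 - p1| = moire_period / p2."""
    return moire_period(p1, p2) / p2

p1, p2 = 1.00, 1.05            # mm, hypothetical grating pitches
A = fringe_amplification(p1, p2)
# With these pitches the fringe moves 20x farther than the actuator does,
# which is what makes optical readout of small displacements practical.
```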
Pub Date: 2023-11-10 | DOI: 10.1177/02783649231215372
Under-canopy dataset for advancing simultaneous localization and mapping in agricultural robotics
Jose Cuaran, Andres Eduardo Baquero Velasquez, Mateus Valverde Gasparino, Naveen Kumar Uppalapati, Arun Narenthiran Sivakumar, Justin Wasserman, Muhammad Huzaifa, Sarita Adve, Girish Chowdhary

Simultaneous localization and mapping (SLAM) has been an active research problem over recent decades. Many leading solutions achieve remarkable performance in environments with familiar structure, such as indoors and in cities. However, our work shows that these leading systems fail in an agricultural setting, particularly in under-canopy navigation in the world's largest-in-acreage crops: corn (Zea mays) and soybean (Glycine max). Abundant visual clutter from leaves, varying illumination, and stark visual similarity cause these environments to lose the familiar structure on which SLAM algorithms rely. To advance SLAM in such unstructured agricultural environments, we present a comprehensive agricultural dataset. Our open dataset consists of stereo images and IMU, wheel-encoder, and GPS measurements continuously recorded from a mobile robot in corn and soybean fields across different growth stages. In addition, we present best-case benchmark results for several leading visual-inertial odometry and SLAM systems. Our data and benchmark clearly show that there is significant research promise in SLAM for agricultural settings. The dataset is available online at https://github.com/jrcuaranv/terrasentia-dataset .
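A common metric behind such visual-inertial odometry and SLAM benchmarks is the absolute trajectory error (ATE). A minimal sketch, assuming time-synchronized trajectories and translation-only alignment (a full evaluation would also align rotation and possibly scale, e.g., via the Umeyama method):

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE) after aligning centroids.

    est, gt: (N, 3) arrays of estimated and ground-truth positions,
    assumed time-synchronized. Translation-only alignment: subtracting the
    centroids removes any constant offset between the two trajectories.
    """
    est = est - est.mean(axis=0)
    gt = gt - gt.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1))))

gt = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)
est = gt + np.array([0.1, 0.0, 0.0])   # constant offset: removed by alignment
```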
Pub Date: 2023-11-09 | DOI: 10.1177/02783649231213117
TRansPose: Large-scale multispectral dataset for transparent object
Jeongyun Kim, Myung-Hwan Jeon, Sangwoo Jung, Wooseong Yang, Minwoo Jung, Jaeho Shin, Ayoung Kim

Transparent objects are encountered frequently in our daily lives, yet recognizing them poses challenges for conventional vision sensors because their unique material properties are not well perceived by RGB or depth cameras. Overcoming this limitation, thermal infrared cameras have emerged as a solution, offering improved visibility and shape information for transparent objects. In this paper, we present TRansPose, the first large-scale multispectral dataset that combines stereo RGB-D images, thermal infrared (TIR) images, and object poses to promote transparent object research. The dataset includes 99 transparent objects, encompassing 43 household items, 27 recyclable trash items, and 29 pieces of chemical laboratory equipment, plus 12 non-transparent objects. It comprises a vast collection of 333,819 images and 4,000,056 annotations, providing instance-level segmentation masks, ground-truth poses, and completed depth information. The data were acquired using a FLIR A65 thermal infrared camera, two Intel RealSense L515 RGB-D cameras, and a Franka Emika Panda robot manipulator. Spanning 87 sequences, TRansPose covers various challenging real-life scenarios, including objects filled with water, diverse lighting conditions, heavy clutter, non-transparent or translucent containers, objects in plastic bags, and multi-stacked objects. Supplementary material is available at https://sites.google.com/view/transpose-dataset .
Pub Date: 2023-11-09 | DOI: 10.1177/02783649231209337
Multilevel motion planning: A fiber bundle formulation
Andreas Orthey, Sohaib Akbar, Marc Toussaint

High-dimensional motion planning problems can often be solved significantly faster by using multilevel abstractions. While there are various ways to formally capture multilevel abstractions, we formulate them in terms of fiber bundles. Fiber bundles essentially describe lower-dimensional projections of the state space using local product spaces, which allows us to concisely describe and derive novel algorithms in terms of bundle restrictions and bundle sections. Given such a structure and a corresponding admissible constraint function, we develop highly efficient and asymptotically optimal sampling-based motion planning methods for high-dimensional state spaces. These methods exploit the structure of fiber bundles through bundle primitives, which are used to create novel bundle planners: the rapidly-exploring quotient-space trees (QRRT*) and the quotient-space roadmap planner (QMP*). Both planners are shown to be probabilistically complete and almost-surely asymptotically optimal. To evaluate our bundle planners, we compare them against classical sampling-based planners on benchmarks of four low-dimensional and eight high-dimensional scenarios, ranging from 21 to 100 degrees of freedom and including multiple robots and nonholonomic constraints. Our findings show improvements of two to six orders of magnitude and underline the efficiency of multilevel motion planners and the benefit of exploiting multilevel abstractions using the terminology of fiber bundles.
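The bundle vocabulary can be sketched abstractly: a projection drops the fiber coordinates to obtain a lower-dimensional base space, a coarse path is found there, and a section-like lift takes it back to the full space. This toy sketch (hypothetical 5-DoF state, fixed-fiber lift) only illustrates the projection/lift structure, not the QRRT*/QMP* algorithms themselves.

```python
import numpy as np

# Hypothetical 5-DoF state: an (x, y) mobile base plus a 3-DoF arm.
# The bundle projection pi drops the fiber (arm) coordinates; planning
# happens first in the base space and the result is lifted back up.

def project(q):
    """Bundle projection pi: R^5 -> R^2 (keep base position, drop fiber)."""
    return q[:2]

def lift(path_base, fiber):
    """Section-like lift: attach fixed fiber coordinates to a base path."""
    return [np.concatenate([b, fiber]) for b in path_base]

q_start = np.array([0.0, 0.0, 0.1, 0.2, 0.3])
base_path = [np.array([t, t]) for t in np.linspace(0.0, 1.0, 5)]
full_path = lift(base_path, q_start[2:])   # coarse solution in the total space
```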
Pub Date: 2023-11-07 | DOI: 10.1177/02783649231207655
Trajectory generation and tracking control for aggressive tail-sitter flights
Guozheng Lu, Yixi Cai, Nan Chen, Fanze Kong, Yunfan Ren, Fu Zhang

We address the theoretical and practical problems related to trajectory generation and tracking control of tail-sitter UAVs. Theoretically, we focus on the differential flatness property with full exploitation of actual UAV aerodynamic models, which lays a foundation for generating dynamically feasible trajectories and achieving high-performance tracking control. We find that a tail-sitter is differentially flat with accurate (not simplified) aerodynamic models within the entire flight envelope, by specifying the coordinated-flight condition and choosing the vehicle position as the flat output. This fundamental property allows us to fully exploit high-fidelity aerodynamic models in trajectory planning and tracking control to achieve accurate tail-sitter flights. In particular, an optimization-based trajectory planner for tail-sitters is proposed to design high-quality, smooth trajectories subject to kinodynamic constraints, singularity-free constraints, and actuator saturation. The planned flat-output trajectory is transformed into a state trajectory in real time, with optional consideration of wind in the environment. To track the state trajectory, a global, singularity-free, and minimally parameterized on-manifold MPC is developed, which fully leverages the accurate aerodynamic model to achieve high-accuracy trajectory tracking within the whole flight envelope. The proposed algorithms are implemented on our quadrotor tail-sitter prototype, "Hong Hu," and their effectiveness is demonstrated through extensive real-world experiments in both indoor and outdoor field tests, including agile SE(3) flight through consecutive narrow windows requiring specific attitudes at speeds up to 10 m/s, typical tail-sitter maneuvers (transition, level flight, and loiter) at speeds up to 20 m/s, and extremely aggressive aerobatic maneuvers (Wingover, Loop, Vertical Eight, and Cuban Eight) with accelerations up to 2.5 g. The video demonstration is available at https://youtu.be/2x_bLbVuyrk .
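The first step of any flat-output-to-state transformation is differentiating the flat output. A minimal sketch for a 1-D polynomial position trajectory (the paper's full map additionally involves the coordinated-flight condition and the aerodynamic model, which are omitted here):

```python
import numpy as np

# Flat output: a polynomial position trajectory p(t) = t^2 (coefficients are
# stored low-to-high degree). Differential flatness means states and inputs
# follow algebraically from the flat output and its derivatives.
pos = np.polynomial.Polynomial([0.0, 0.0, 1.0])  # p(t) = t^2
vel = pos.deriv(1)                               # p'(t) = 2t
acc = pos.deriv(2)                               # p''(t) = 2

t = 2.0
state = {"p": pos(t), "v": vel(t), "a": acc(t)}  # evaluated along the trajectory
```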
Pub Date: 2023-11-07 | DOI: 10.1177/02783649231210012
Optimal virtual tube planning and control for swarm robotics
Pengda Mao, Rao Fu, Quan Quan

This paper presents a novel method for efficiently solving trajectory planning problems for robot swarms in cluttered environments. Recent research has demonstrated high success rates in real-time local trajectory planning for swarms in cluttered environments, but optimizing trajectories for each robot remains computationally expensive, with a computational complexity ranging from [Formula: see text] to [Formula: see text], where [Formula: see text] is the number of parameters in the parameterized trajectory, [Formula: see text] is the precision, and [Formula: see text] is the number of iterations with respect to [Formula: see text] and [Formula: see text]. Furthermore, it is difficult to move the swarm as a group. To address these issues, we define and then construct the optimal virtual tube, which contains infinitely many optimal trajectories. Under certain conditions, any optimal trajectory in the optimal virtual tube can be expressed as a convex combination of a finite number of optimal trajectories, with a computational complexity of [Formula: see text]. A hierarchical approach is then proposed, comprising a minimum-energy planning method for the optimal virtual tube and distributed model predictive control. In simulations and experiments, the proposed approach is validated, and its effectiveness over other methods is demonstrated through comparison.
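The convex-combination property can be illustrated with a toy sketch: blending trajectories sampled on a common time grid. The straight-line trajectories below are hypothetical, and the claim that the blend is itself optimal holds only under the paper's conditions.

```python
import numpy as np

def convex_combination(trajs, weights):
    """Blend trajectories sampled on a common time grid.

    trajs: list of (N, d) arrays; weights: nonnegative values summing to 1.
    In the paper's setting, a convex combination of finitely many optimal
    trajectories yields another optimal trajectory inside the virtual tube.
    """
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-9
    return sum(wi * ti for wi, ti in zip(w, trajs))

t = np.linspace(0.0, 1.0, 11)[:, None]
traj_a = np.hstack([t, 0 * t])           # boundary trajectory along y = 0
traj_b = np.hstack([t, 0 * t + 1.0])     # boundary trajectory along y = 1
mid = convex_combination([traj_a, traj_b], [0.5, 0.5])   # runs along y = 0.5
```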
Pub Date: 2023-11-07 | DOI: 10.1177/02783649231202548
Exceeding traditional curvature limits of concentric tube robots through redundancy resolution
Patrick L. Anderson, Richard J. Hendrick, Margaret F. Rox, Robert J. Webster

Understanding elastic instability has been a recent focus of concentric tube robot research. Modeling advances have enabled prediction of when instabilities will occur and produced metrics for the stability of the robot during use. In this paper, we show how these metrics can be used to resolve redundancy to avoid elastic instability, opening the door to the practical use of higher-curvature designs than have previously been possible. We demonstrate the effectiveness of the approach using a three-tube robot that is stabilized by redundancy resolution when following trajectories that would otherwise result in elastic instabilities. We also show that it is stabilized when teleoperated in ways that would otherwise produce elastic instabilities. Lastly, we show that the redundancy resolution framework presented here can be applied to other control objectives useful for surgical robots, such as maximizing or minimizing compliance in desired directions.
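Redundancy resolution of this kind is commonly written as a null-space projection: track the task velocity exactly and spend the remaining degrees of freedom on a secondary objective, which in the paper's setting would be a stability metric. A generic sketch with hypothetical numbers, not the paper's concentric-tube model:

```python
import numpy as np

def redundancy_resolution(J, x_dot, q_dot_secondary):
    """Resolved-rate control with a null-space secondary objective.

    Tracks the task-space velocity x_dot exactly and projects the secondary
    joint velocity (e.g., the gradient of a stability metric) into the
    Jacobian null space, so it cannot disturb the task.
    """
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J       # null-space projector
    return J_pinv @ x_dot + N @ q_dot_secondary

J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])               # 2-D task, 3 actuated DoF
x_dot = np.array([0.2, -0.1])                 # desired tip velocity
q_dot_sec = np.array([0.0, 0.0, 0.5])         # pushes only the free DoF
q_dot = redundancy_resolution(J, x_dot, q_dot_sec)
```

Because the secondary term is projected into the null space, the tip still moves at exactly `x_dot` while the spare degree of freedom drifts toward the secondary objective.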
Pub Date : 2023-11-07DOI: 10.1177/02783649231208729
Erdem Bıyık, Nicolas Huynh, Mykel J. Kochenderfer, Dorsa Sadigh
Designing reward functions is a difficult task in AI and robotics. The complex task of directly specifying all the desirable behaviors a robot needs to optimize often proves challenging for humans. A popular solution is to learn reward functions using expert demonstrations. This approach, however, is fraught with many challenges. Some methods require heavily structured models, for example, reward functions that are linear in some predefined set of features, while others adopt less structured reward functions that may necessitate tremendous amounts of data. Moreover, it is difficult for humans to provide demonstrations on robots with high degrees of freedom, or even quantifying reward values for given trajectories. To address these challenges, we present a preference-based learning approach, where human feedback is in the form of comparisons between trajectories. We do not assume highly constrained structures on the reward function. Instead, we employ a Gaussian process to model the reward function and propose a mathematical formulation to actively fit the model using only human preferences. Our approach enables us to tackle both inflexibility and data-inefficiency problems within a preference-based learning framework. We further analyze our algorithm in comparison to several baselines on reward optimization, where the goal is to find the optimal robot trajectory in a data-efficient way instead of learning the reward function for every possible trajectory. Our results in three different simulation experiments and a user study show our approach can efficiently learn expressive reward functions for robotic tasks, and outperform the baselines in both reward learning and reward optimization.
{"title":"Active preference-based Gaussian process regression for reward learning and optimization","authors":"Erdem Bıyık, Nicolas Huynh, Mykel J. Kochenderfer, Dorsa Sadigh","doi":"10.1177/02783649231208729","DOIUrl":"https://doi.org/10.1177/02783649231208729","url":null,"abstract":"Designing reward functions is a difficult task in AI and robotics. The complex task of directly specifying all the desirable behaviors a robot needs to optimize often proves challenging for humans. A popular solution is to learn reward functions using expert demonstrations. This approach, however, is fraught with many challenges. Some methods require heavily structured models, for example, reward functions that are linear in some predefined set of features, while others adopt less structured reward functions that may necessitate tremendous amounts of data. Moreover, it is difficult for humans to provide demonstrations on robots with high degrees of freedom, or even quantifying reward values for given trajectories. To address these challenges, we present a preference-based learning approach, where human feedback is in the form of comparisons between trajectories. We do not assume highly constrained structures on the reward function. Instead, we employ a Gaussian process to model the reward function and propose a mathematical formulation to actively fit the model using only human preferences. Our approach enables us to tackle both inflexibility and data-inefficiency problems within a preference-based learning framework. We further analyze our algorithm in comparison to several baselines on reward optimization, where the goal is to find the optimal robot trajectory in a data-efficient way instead of learning the reward function for every possible trajectory. 
Our results in three different simulation experiments and a user study show our approach can efficiently learn expressive reward functions for robotic tasks, and outperform the baselines in both reward learning and reward optimization.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135475859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
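The core ingredient of such an approach, a Gaussian process over rewards fitted from pairwise comparisons, can be sketched with a standard Laplace (posterior-mode) approximation under a logistic preference likelihood. This is a generic textbook construction, not the paper's algorithm: the function names are ours, the candidates are toy 1-D "trajectory features," and the active-query strategy is omitted.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel over trajectory feature vectors X (n, d)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def laplace_preference_posterior(K, prefs, n_iter=50):
    """Posterior mode of GP reward values f under preference data.

    K     : (n, n) prior covariance over rewards of n candidate trajectories
    prefs : list of (winner, loser) index pairs, each observed with
            likelihood sigmoid(f[winner] - f[loser])
    """
    n = K.shape[0]
    K = K + 1e-6 * np.eye(n)          # jitter for numerical stability
    f = np.zeros(n)
    for _ in range(n_iter):
        g = np.zeros(n)               # gradient of log-likelihood
        W = np.zeros((n, n))          # negative Hessian of log-likelihood
        for w, l in prefs:
            s = 1.0 / (1.0 + np.exp(-(f[w] - f[l])))
            g[w] += 1.0 - s
            g[l] -= 1.0 - s
            c = s * (1.0 - s)
            W[w, w] += c; W[l, l] += c
            W[w, l] -= c; W[l, w] -= c
        # Newton step on log p(D|f) - 0.5 f^T K^{-1} f,
        # written as f <- (I + K W)^{-1} K (W f + g)
        f = np.linalg.solve(np.eye(n) + K @ W, K @ (W @ f + g))
    return f
```

In an active variant, one would repeatedly pick the trajectory pair whose comparison is most informative about `f` (e.g., by expected information gain), query the human, and refit; here the fit alone shows how a few comparisons shape the reward posterior.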
Pub Date : 2023-11-06 DOI: 10.1177/02783649231210592
C. David Remy, Zachary Brei, Daniel Bruder, Jan Remy, Keith Buffinton, R. Brent Gillespie
In this paper, we introduce the concept of the Fluid Jacobian, which provides a description of the power transmission that operates between the fluid and mechanical domains in soft robotic systems. It can be understood as a generalization of the traditional kinematic Jacobian that relates the joint space torques and velocities to the task space forces and velocities of a robot. In a similar way, the Fluid Jacobian relates fluid pressure to task space forces and fluid flow to task space velocities. In addition, the Fluid Jacobian can also be regarded as a generalization of the piston cross-sectional area in a fluid-driven cylinder that extends to complex geometries and multiple dimensions. In the following, we present a theoretical derivation of this framework, focus on important special cases, and illustrate the meaning and practical applicability of the Fluid Jacobian in four brief examples.
{"title":"The “Fluid Jacobian”: Modeling force-motion relationships in fluid-driven soft robots","authors":"C. David Remy, Zachary Brei, Daniel Bruder, Jan Remy, Keith Buffinton, R. Brent Gillespie","doi":"10.1177/02783649231210592","DOIUrl":"https://doi.org/10.1177/02783649231210592","url":null,"abstract":"In this paper, we introduce the concept of the Fluid Jacobian, which provides a description of the power transmission that operates between the fluid and mechanical domains in soft robotic systems. It can be understood as a generalization of the traditional kinematic Jacobian that relates the joint space torques and velocities to the task space forces and velocities of a robot. In a similar way, the Fluid Jacobian relates fluid pressure to task space forces and fluid flow to task space velocities. In addition, the Fluid Jacobian can also be regarded as a generalization of the piston cross-sectional area in a fluid-driven cylinder that extends to complex geometries and multiple dimensions. In the following, we present a theoretical derivation of this framework, focus on important special cases, and illustrate the meaning and practical applicability of the Fluid Jacobian in four brief examples.","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":"21 20","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135684813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}