
International Journal of Robotics Research: Latest Publications

Decentralized state estimation: An approach using pseudomeasurements and preintegration
IF 7.5 | CAS Tier 1, Computer Science | Q1 ROBOTICS | Pub Date: 2024-09-01 | Epub Date: 2024-04-03 | DOI: 10.1177/02783649241230993
Charles Champagne Cossette, Mohammed Ayman Shalaby, David Saussié, James Richard Forbes

This paper addresses the problem of decentralized, collaborative state estimation in robotic teams. In particular, this paper considers problems where individual robots estimate similar physical quantities, such as each other's position relative to themselves. The use of pseudomeasurements is introduced as a means of modeling such relationships between robots' state estimates and is shown to be a tractable way to approach the decentralized state estimation problem. Moreover, this formulation easily leads to a general-purpose observability test that simultaneously accounts for measurements that robots collect from their own sensors, as well as the communication structure within the team. Finally, input preintegration is proposed as a communication-efficient way of sharing odometry information between robots, and the entire theory is appropriate for both vector-space and Lie-group state definitions. To overcome the need for communicating preintegrated covariance information, a deep autoencoder is proposed that reconstructs the covariance information from the inputs, hence further reducing the communication requirements. The proposed framework is evaluated on three different simulated problems, and one experiment involving three quadcopters.
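
As a rough illustration of the pseudomeasurement idea summarized above (a minimal linear-Gaussian sketch with made-up numbers, not the authors' Lie-group formulation, and without preintegration), the snippet below fuses two robots' estimates of the same physical quantity by treating their required agreement as a zero-valued measurement in a standard Kalman update:

```python
import numpy as np

# Stacked state: robot 1's estimate of robot 2's position (first two entries)
# and robot 2's estimate of its own position (last two entries). Values are made up.
x = np.array([1.0, 2.0, 1.3, 1.8])       # slightly inconsistent estimates
P = np.diag([0.5, 0.5, 0.2, 0.2])        # robot 2 is more certain about itself

# Pseudomeasurement: the two estimates of the same quantity should agree,
# i.e., z = H x + noise with z fixed to 0 and H = [I, -I].
H = np.hstack([np.eye(2), -np.eye(2)])
R = 0.01 * np.eye(2)                     # pseudomeasurement noise (a tuning knob)

# Standard Kalman measurement update.
z = np.zeros(2)
y = z - H @ x                            # innovation
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x_post = x + K @ y
P_post = (np.eye(4) - K @ H) @ P

print(x_post)                            # both estimates are pulled toward each other
```

Because the pseudomeasurement enters as an ordinary measurement row, the same fusion and observability machinery applies to it unchanged.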

Citations: 0
Linear electrostatic actuators with Moiré-effect optical proprioceptive sensing and electroadhesive braking
CAS Tier 1, Computer Science | Q1 Mathematics | Pub Date: 2023-11-14 | DOI: 10.1177/02783649231210593
Inrak Choi, Sohee John Yoon, Yong-Lae Park
Muscles in animals and actuation systems in advanced robots do not consist of the actuation component alone; the motive, dissipative, and proprioceptive components exist as a complete set to achieve versatile and precise manipulation tasks. We present such a system: a linear electrostatic actuator package that incorporates sensing and braking components. Our modular actuator design is composed of actuator films and a dielectric fluid, and we examine the performance of the proposed system both theoretically and experimentally. In addition, we introduce a mechanism for optical proprioceptive sensing that utilizes the Moiré pattern innately generated on the actuator surface, which allows high-resolution, noise-free reading of the actuator position. The optical sensor is also capable of measuring the force exerted by the actuator. Lastly, we add an electroadhesive brake to the package in parallel with the actuator, introduce a method of mode switching that utilizes all three components, and present control demonstrations with a robot arm. Our actuation system is compact and flexible and can be easily integrated with various robotic applications.
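
The Moiré-based proprioception mentioned above relies on the standard two-grating magnification effect; the sketch below illustrates that relation with made-up pitch values (a generic model of the principle, not the authors' sensor design):

```python
# Moiré magnification of a small displacement (generic two-grating model;
# the pitches below are illustrative numbers, not the paper's design values).
p1 = 1.00   # pitch of the moving grating on the actuator film, in mm
p2 = 1.05   # pitch of the fixed reference grating, in mm

fringe_period = p1 * p2 / abs(p2 - p1)   # ~21 mm: coarse fringes, easy to image
magnification = p2 / abs(p2 - p1)        # ~21x optical magnification

delta = 0.05                             # true actuator displacement, in mm
fringe_shift = delta * magnification     # what the camera actually observes

# Inverting the relation recovers the displacement from the observed fringe shift.
delta_est = fringe_shift / magnification
print(fringe_period, magnification, fringe_shift, delta_est)
```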
Citations: 0
Under-canopy dataset for advancing simultaneous localization and mapping in agricultural robotics
CAS Tier 1, Computer Science | Q1 Mathematics | Pub Date: 2023-11-10 | DOI: 10.1177/02783649231215372
Jose Cuaran, Andres Eduardo Baquero Velasquez, Mateus Valverde Gasparino, Naveen Kumar Uppalapati, Arun Narenthiran Sivakumar, Justin Wasserman, Muhammad Huzaifa, Sarita Adve, Girish Chowdhary
Simultaneous localization and mapping (SLAM) has been an active research problem over recent decades. Many leading solutions are available that can achieve remarkable performance in environments with familiar structure, such as indoors and cities. However, our work shows that these leading systems fail in an agricultural setting, particularly in under-canopy navigation in the world's largest-acreage crops: corn (Zea mays) and soybean (Glycine max). Abundant visual clutter from leaves, varying illumination, and stark visual similarity cause these environments to lose the familiar structure on which SLAM algorithms rely. To advance SLAM in such unstructured agricultural environments, we present a comprehensive agricultural dataset. Our open dataset consists of stereo images and IMU, wheel-encoder, and GPS measurements continuously recorded from a mobile robot in corn and soybean fields across different growth stages. In addition, we present best-case benchmark results for several leading visual-inertial odometry and SLAM systems. Our data and benchmark clearly show that there is significant research promise in SLAM for agricultural settings. The dataset is available online at: https://github.com/jrcuaranv/terrasentia-dataset.
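
Benchmark results such as those mentioned above are commonly reported as absolute trajectory error (ATE) after rigid alignment; the sketch below shows a generic ATE computation with a Kabsch-style alignment on synthetic trajectories (an evaluation sketch, not the dataset's own tooling):

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute trajectory error after rigid (Kabsch) alignment.
    est, gt: (N, 3) arrays of estimated and ground-truth positions."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(E.T @ G)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = (U @ S @ Vt).T                  # rotation taking est into the gt frame
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))

# Synthetic example: a slowly drifting copy of a ground-truth path.
rng = np.random.default_rng(0)
gt = np.cumsum(rng.normal(size=(500, 3)), axis=0)
est = gt + np.linspace(0, 1, 500)[:, None] * np.array([0.5, 0.0, 0.1])
print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```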
Citations: 0
Multilevel motion planning: A fiber bundle formulation
CAS Tier 1, Computer Science | Q1 Mathematics | Pub Date: 2023-11-09 | DOI: 10.1177/02783649231209337
Andreas Orthey, Sohaib Akbar, Marc Toussaint
High-dimensional motion planning problems can often be solved significantly faster by using multilevel abstractions. While there are various ways to formally capture multilevel abstractions, we formulate them in terms of fiber bundles. Fiber bundles essentially describe lower-dimensional projections of the state space using local product spaces, which allows us to concisely describe and derive novel algorithms in terms of bundle restrictions and bundle sections. Given such a structure and a corresponding admissible constraint function, we develop highly efficient and asymptotically optimal sampling-based motion planning methods for high-dimensional state spaces. Those methods exploit the structure of fiber bundles through the use of bundle primitives. Those primitives are used to create novel bundle planners, the rapidly-exploring quotient space trees (QRRT*), and the quotient space roadmap planner (QMP*). Both planners are shown to be probabilistically complete and almost-surely asymptotically optimal. To evaluate our bundle planners, we compare them against classical sampling-based planners on benchmarks of four low-dimensional scenarios, and eight high-dimensional scenarios, ranging from 21 to 100 degrees of freedom, including multiple robots and nonholonomic constraints. Our findings show improvements up to two to six orders of magnitude and underline the efficiency of multilevel motion planners and the benefit of exploiting multilevel abstractions using the terminology of fiber bundles.
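
As a toy illustration of the bundle vocabulary used above (a hypothetical 3-DOF example, not the QRRT* or QMP* implementation): the configuration (x, y, θ) projects onto the base space (x, y), a candidate path can be checked cheaply at that lower-dimensional level, and a section then lifts it back to the full space:

```python
import numpy as np

# Toy fiber bundle: total space (x, y, theta), base space (x, y).
def project(q):
    """Bundle projection pi: (x, y, theta) -> (x, y)."""
    return q[:2]

def lift(xy_path):
    """A section of the bundle: assign a heading tangent to the base path."""
    headings = np.arctan2(np.gradient(xy_path[:, 1]), np.gradient(xy_path[:, 0]))
    return np.column_stack([xy_path, headings])

def base_path_valid(xy_path, obstacles, radius=0.5):
    """Cheap collision check in the base space only (disc obstacles)."""
    d = np.linalg.norm(xy_path[:, None, :] - obstacles[None, :, :], axis=2)
    return bool(np.all(d > radius))

# Plan coarsely in the base space, then lift to the full configuration space.
start, goal = np.array([0.0, 0.0, 0.0]), np.array([5.0, 3.0, 0.0])
obstacles = np.array([[2.5, 2.5]])
xy_path = np.linspace(project(start), project(goal), 50)

if base_path_valid(xy_path, obstacles):
    full_path = lift(xy_path)   # refine or repair at the full-space level if needed
    print(full_path.shape)      # (50, 3)
else:
    print("infeasible at the base level; replan the base path first")
```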
Citations: 15
TRansPose: Large-scale multispectral dataset for transparent object
CAS Tier 1, Computer Science | Q1 Mathematics | Pub Date: 2023-11-09 | DOI: 10.1177/02783649231213117
Jeongyun Kim, Myung-Hwan Jeon, Sangwoo Jung, Wooseong Yang, Minwoo Jung, Jaeho Shin, Ayoung Kim
Transparent objects are encountered frequently in our daily lives, yet recognizing them poses challenges for conventional vision sensors because their unique material properties are not well perceived by RGB or depth cameras. Overcoming this limitation, thermal infrared cameras have emerged as a solution, offering improved visibility and shape information for transparent objects. In this paper, we present TRansPose, the first large-scale multispectral dataset that combines stereo RGB-D, thermal infrared (TIR) images, and object poses to promote transparent object research. The dataset includes 99 transparent objects, encompassing 43 household items, 27 recyclable trash items, 29 chemical laboratory items, and 12 non-transparent objects. It comprises a vast collection of 333,819 images and 4,000,056 annotations, providing instance-level segmentation masks, ground-truth poses, and completed depth information. The data was acquired using a FLIR A65 thermal infrared camera, two Intel RealSense L515 RGB-D cameras, and a Franka Emika Panda robot manipulator. Spanning 87 sequences, TRansPose covers various challenging real-life scenarios, including objects filled with water, diverse lighting conditions, heavy clutter, non-transparent or translucent containers, objects in plastic bags, and multi-stacked objects. Supplementary material can be accessed at: https://sites.google.com/view/transpose-dataset.
Citations: 0
Trajectory generation and tracking control for aggressive tail-sitter flights
CAS Tier 1, Computer Science | Q1 Mathematics | Pub Date: 2023-11-07 | DOI: 10.1177/02783649231207655
Guozheng Lu, Yixi Cai, Nan Chen, Fanze Kong, Yunfan Ren, Fu Zhang
We address the theoretical and practical problems related to the trajectory generation and tracking control of tail-sitter UAVs. Theoretically, we focus on the differential flatness property with full exploitation of actual UAV aerodynamic models, which lays a foundation for generating dynamically feasible trajectories and achieving high-performance tracking control. We have found that a tail-sitter is differentially flat with accurate (not simplified) aerodynamic models within the entire flight envelope, by specifying the coordinated-flight condition and choosing the vehicle position as the flat output. This fundamental property allows us to fully exploit high-fidelity aerodynamic models in trajectory planning and tracking control to achieve accurate tail-sitter flights. In particular, an optimization-based trajectory planner for tail-sitters is proposed to design high-quality, smooth trajectories that account for kinodynamic constraints, singularity-free constraints, and actuator saturation. The planned trajectory of the flat output is transformed into a state trajectory in real time, with optional consideration of wind in the environment. To track the state trajectory, a global, singularity-free, and minimally parameterized on-manifold MPC is developed, which fully leverages the accurate aerodynamic model to achieve high-accuracy trajectory tracking within the whole flight envelope. The proposed algorithms are implemented on our quadrotor tail-sitter prototype, "Hong Hu," and their effectiveness is demonstrated through extensive real-world experiments in both indoor and outdoor field tests, including agile SE(3) flight through consecutive narrow windows requiring specific attitudes at speeds up to 10 m/s, typical tail-sitter maneuvers (transition, level flight, and loiter) at speeds up to 20 m/s, and extremely aggressive aerobatic maneuvers (Wingover, Loop, Vertical Eight, and Cuban Eight) with acceleration up to 2.5 g. A video demonstration is available at https://youtu.be/2x_bLbVuyrk.
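
Differential flatness, the property exploited above, is easiest to see on a simplified point-mass multirotor model (a generic sketch, not the paper's aerodynamic tail-sitter model; the mass and trajectory are made up): given a smooth position trajectory as the flat output, the required thrust magnitude and thrust direction follow algebraically from its derivatives:

```python
import numpy as np

g = 9.81
m = 1.5   # vehicle mass in kg (made-up value)

def flat_output(t):
    """Flat output: a smooth position trajectory p(t) and its second derivative."""
    p = np.array([np.cos(t), np.sin(t), 0.5 * t])    # a helix
    a = np.array([-np.cos(t), -np.sin(t), 0.0])      # analytic acceleration
    return p, a

def flat_to_state(t):
    """Point-mass model m*a = f*b3 - m*g*e3, so f*b3 = m*(a + g*e3):
    the flat output determines thrust magnitude f and thrust direction b3."""
    p, a = flat_output(t)
    f_vec = m * (a + np.array([0.0, 0.0, g]))
    f = np.linalg.norm(f_vec)      # collective thrust
    b3 = f_vec / f                 # body z-axis (thrust direction)
    return p, f, b3

for t in (0.0, 1.0, 2.0):
    p, f, b3 = flat_to_state(t)
    print(t, p.round(2), round(f, 2), b3.round(2))
```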
Citations: 0
Optimal virtual tube planning and control for swarm robotics
CAS Tier 1, Computer Science | Q1 Mathematics | Pub Date: 2023-11-07 | DOI: 10.1177/02783649231210012
Pengda Mao, Rao Fu, Quan Quan
This paper presents a novel method for efficiently solving a trajectory planning problem for swarm robotics in cluttered environments. Recent research has demonstrated high success rates in real-time local trajectory planning for swarm robotics in cluttered environments, but optimizing trajectories for each robot is still computationally expensive, with a computational complexity from [Formula: see text] to [Formula: see text], where [Formula: see text] is the number of parameters in the parameterized trajectory, [Formula: see text] is the precision, and [Formula: see text] is the number of iterations with respect to [Formula: see text] and [Formula: see text]. Furthermore, it is difficult to move the swarm as a group. To address this issue, we define and then construct the optimal virtual tube, which contains infinitely many optimal trajectories. Under certain conditions, any optimal trajectory in the optimal virtual tube can be expressed as a convex combination of a finite number of optimal trajectories, with a computational complexity of [Formula: see text]. A hierarchical approach is then proposed that combines energy-minimizing planning of the optimal virtual tube with distributed model predictive control. In simulations and experiments, the proposed approach is validated, and its effectiveness over other methods is demonstrated through comparison.
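
The convex-combination property stated above can be illustrated on a toy example (made-up waypoints, not the paper's construction): because boundary conditions are linear in the trajectory, any convex blend of two trajectories that satisfy them also satisfies them:

```python
import numpy as np

# Two trajectories sharing the same start and goal positions (linear constraints).
t = np.linspace(0.0, 1.0, 200)
start, goal = np.array([0.0, 0.0]), np.array([4.0, 2.0])

traj_a = start + (goal - start) * t[:, None]                   # straight line
traj_b = traj_a + np.column_stack([np.zeros_like(t),           # a detour that still
                                   1.5 * np.sin(np.pi * t)])   # starts/ends correctly

# Any convex combination lam*a + (1-lam)*b also starts at `start` and ends at `goal`,
# because those constraints are linear in the trajectory.
for lam in (0.0, 0.25, 0.5, 1.0):
    blend = lam * traj_a + (1.0 - lam) * traj_b
    assert np.allclose(blend[0], start) and np.allclose(blend[-1], goal)

print("all convex blends satisfy the shared boundary conditions")
```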
Citations: 0
Exceeding traditional curvature limits of concentric tube robots through redundancy resolution
CAS Tier 1, Computer Science | Q1 Mathematics | Pub Date: 2023-11-07 | DOI: 10.1177/02783649231202548
Patrick L Anderson, Richard J Hendrick, Margaret F Rox, Robert J Webster
Understanding elastic instability has been a recent focus of concentric tube robot research. Modeling advances have enabled prediction of when instabilities will occur and produced metrics for the stability of the robot during use. In this paper, we show how these metrics can be used to resolve redundancy to avoid elastic instability, opening the door for the practical use of higher curvature designs than have previously been possible. We demonstrate the effectiveness of the approach using a three-tube robot that is stabilized by redundancy resolution when following trajectories that would otherwise result in elastic instabilities. We also show that it is stabilized when teleoperated in ways that otherwise produce elastic instabilities. Lastly, we show that the redundancy resolution framework presented here can be applied to other control objectives useful for surgical robots, such as maximizing or minimizing compliance in desired directions.
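
Redundancy resolution of the kind used above is commonly implemented with a null-space projection; the sketch below uses a made-up Jacobian and a placeholder gradient standing in for an elastic-stability metric (a generic scheme, not the authors' concentric-tube model):

```python
import numpy as np

def redundancy_resolution(J, x_dot_des, grad_secondary, k=1.0):
    """Task-priority velocity control: track x_dot_des exactly and use the
    remaining degrees of freedom to ascend a secondary objective."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J        # null-space projector
    return J_pinv @ x_dot_des + N @ (k * grad_secondary)

# Made-up numbers: a 3-DOF actuation space commanding a 2-D tip velocity.
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3]])
x_dot_des = np.array([0.05, -0.02])
grad_stability = np.array([0.0, -1.0, 1.0])    # placeholder for d(stability)/dq

q_dot = redundancy_resolution(J, x_dot_des, grad_stability)
print(q_dot, J @ q_dot)   # the second print recovers x_dot_des: the task is unaffected
```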
Citations: 2
Certified polyhedral decompositions of collision-free configuration space
CAS Tier 1, Computer Science | Q1 Mathematics | Pub Date: 2023-11-03 | DOI: 10.1177/02783649231201437
Hongkai Dai, Alexandre Amice, Peter Werner, Annan Zhang, Russ Tedrake
Understanding the geometry of collision-free configuration space (C-free) in the presence of Cartesian-space obstacles is an essential ingredient for collision-free motion planning. While it is possible to check for collisions at a point using standard algorithms, to date no practical method exists for computing C-free regions with rigorous certificates, due to the complexity of mapping Cartesian-space obstacles through the kinematics. In this work, we present the first rigorous method, to our knowledge, for approximately decomposing a rational parameterization of C-free into certified polyhedral regions. Our method, called C-Iris (C-space Iterative Regional Inflation by Semidefinite programming), generates large convex polytopes in a rational parameterization of the configuration space that are rigorously certified to be collision-free. Such regions have been shown to be useful for both optimization-based and randomized motion planning. Based on convex optimization, our method works in arbitrary dimensions, makes assumptions only about the convexity of the obstacles in 3D Cartesian space, and is fast enough to scale to realistic problems in manipulation. We demonstrate our algorithm's ability to fill a non-trivial amount of collision-free C-space in several 2-DOF examples where the C-space can be visualized, as well as its scalability on a 7-DOF KUKA iiwa, a 6-DOF UR3e, and 12-DOF bimanual manipulators. An implementation of our algorithm is open-sourced in Drake. We furthermore provide examples of our algorithm in interactive Python notebooks.
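
A certified region of the kind produced above is simply a polytope {q : C q ≤ d}, so downstream queries reduce to linear algebra or small linear programs; the sketch below uses made-up boxes in a 2-D configuration space (a usage sketch, not the C-Iris construction itself, which is available in Drake):

```python
import numpy as np
from scipy.optimize import linprog

def contains(C, d, q):
    """Is configuration q inside the certified polytope {q : C q <= d}?"""
    return bool(np.all(C @ q <= d + 1e-9))

def regions_overlap(C1, d1, C2, d2):
    """Do two certified regions intersect? A feasibility LP with a zero objective."""
    C = np.vstack([C1, C2])
    d = np.concatenate([d1, d2])
    res = linprog(c=np.zeros(C.shape[1]), A_ub=C, b_ub=d,
                  bounds=[(None, None)] * C.shape[1])
    return res.status == 0

# Two made-up axis-aligned boxes in a 2-D configuration space.
C_box = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
d1 = np.array([1.0, 0.0, 1.0, 0.0])      # the box [0, 1] x [0, 1]
d2 = np.array([1.5, -0.5, 1.5, -0.5])    # the box [0.5, 1.5] x [0.5, 1.5]

print(contains(C_box, d1, np.array([0.2, 0.7])))   # True
print(regions_overlap(C_box, d1, C_box, d2))       # True: the boxes overlap
```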
Citations: 0
Nonverbal social behavior generation for social robots using end-to-end learning
CAS Tier 1, Computer Science | Q1 Mathematics | Pub Date: 2023-11-02 | DOI: 10.1177/02783649231207974
Woo-Ri Ko, Minsu Jang, Jaeyeon Lee, Jaehong Kim
Social robots facilitate improved human–robot interactions through nonverbal behaviors such as handshakes or hugs. However, the traditional methods, which rely on precoded motions, are predictable and can detract from the perception of robots as interactive agents. To address this issue, we have introduced a Seq2Seq-based neural network model that learns social behaviors from human–human interactions in an end-to-end manner. To mitigate the risk of invalid pose sequences during long-term behavior generation, we incorporated a generative adversarial network (GAN). This proposed method was tested using the humanoid robot, Pepper, in a simulated environment. Given the challenges in assessing the success of social behavior generation, we devised novel metrics to quantify the discrepancy between the generated and ground-truth behaviors. Our analysis reveals the impact of different networks on behavior generation performance and compares the efficacy of learning multiple behaviors versus a single behavior. We anticipate that our method will find application in various sectors, including home service, guide, delivery, educational, and virtual robots, thereby enhancing user interaction and enjoyment.
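
A minimal Seq2Seq skeleton for pose sequences of the kind described above might look as follows (a hypothetical PyTorch sketch: the 30-D pose vector, single-layer GRUs, and autoregressive decoding are illustrative choices, and the GAN discriminator that penalizes invalid pose sequences is omitted):

```python
import torch
import torch.nn as nn

class PoseSeq2Seq(nn.Module):
    """Encode an observed human pose sequence, then decode a robot behavior sequence."""
    def __init__(self, pose_dim=30, hidden_dim=128):
        super().__init__()
        self.encoder = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, src, tgt_len):
        # src: (batch, src_len, pose_dim) observed partner poses
        _, h = self.encoder(src)
        frame = src[:, -1:, :]          # start decoding from the last observed frame
        outputs = []
        for _ in range(tgt_len):        # autoregressive generation
            dec_out, h = self.decoder(frame, h)
            frame = self.out(dec_out)
            outputs.append(frame)
        return torch.cat(outputs, dim=1)

model = PoseSeq2Seq()
src = torch.randn(4, 20, 30)            # 4 sequences, 20 frames, 30-D pose vectors
pred = model(src, tgt_len=10)           # generate 10 frames of robot behavior
print(pred.shape)                       # torch.Size([4, 10, 30])
```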
Citations: 0