We propose AstroSLAM, a standalone vision-based solution for autonomous online navigation around an unknown celestial target small body. AstroSLAM is predicated on the formulation of the SLAM problem as an incrementally growing factor graph, facilitated by the use of the GTSAM library and the iSAM2 engine. By combining sensor fusion with orbital motion priors, we achieve improved performance over a baseline SLAM solution and outperform state-of-the-art methods predicated on pre-integrated inertial measurement unit factors. We incorporate orbital motion constraints into the factor graph by devising a novel relative dynamics—RelDyn—factor, which links the relative pose of the spacecraft to the problem of predicting trajectories stemming from the motion of the spacecraft in the vicinity of the small body. We demonstrate AstroSLAM’s performance and compare against state-of-the-art methods using real legacy mission imagery and trajectory data courtesy of NASA’s Planetary Data System, as well as real in-lab imagery data produced on a 3 degree-of-freedom spacecraft simulator test-bed.
“AstroSLAM: Autonomous monocular navigation in the vicinity of a celestial small body—Theory and experiments,” Mehregan Dor, Travis Driver, Kenneth Getzandanner, Panagiotis Tsiotras. The International Journal of Robotics Research, 2024-06-21. DOI: 10.1177/02783649241234367.
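The RelDyn factor itself is derived in the paper; as a rough, hypothetical illustration of what a relative-dynamics residual looks like, the sketch below propagates a relative position under point-mass gravity and compares it against the next estimated position. The gravitational parameter `MU`, the integrator, and the function names are illustrative assumptions, not the paper's formulation.

```python
import math

# Hypothetical gravitational parameter of a small body (km^3/s^2); illustrative only.
MU = 4.9e-9

def propagate(pos, vel, dt, steps=1000):
    """Propagate a relative position under point-mass gravity (semi-implicit Euler)."""
    h = dt / steps
    for _ in range(steps):
        r = math.sqrt(sum(p * p for p in pos))
        acc = [-MU * p / r**3 for p in pos]
        vel = [v + h * a for v, a in zip(vel, acc)]
        pos = [p + h * v for p, v in zip(pos, vel)]
    return pos, vel

def dynamics_residual(pos_i, vel_i, pos_j, dt):
    """Residual a dynamics factor would penalize: predicted state minus estimated state."""
    pred, _ = propagate(pos_i, vel_i, dt)
    return [a - b for a, b in zip(pred, pos_j)]
```

In a factor-graph setting, such a residual (weighted by a noise model) constrains consecutive pose variables to be consistent with the orbital dynamics.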
Pub Date: 2024-06-21. DOI: 10.1177/02783649241259138
Ning Tan, Peng Yu, Xin Wang, Kai Huang
Continuum robots have great application value and broad prospects in various fields due to their dexterity and compliance. To fully exploit their advantages, it is crucial to develop an effective, accurate, and robust control system for them. However, research on continuum robot control is still in its infancy, and many problems in this field remain unsolved. In particular, this paper focuses on the task-space behavior and the generic control of heterogeneous continuum robots. First, a controller is proposed to achieve the kinematic motion control and visual servoing of continuum robots with predefined task-space behavior. The predefined behavior is twofold: prescribed task-space error and predefined convergence time. Then, the proposed controller is integrated with a velocity-level kinematic mapping estimator to obtain a model-free control system, which is applicable to heterogeneous continuum robots. Furthermore, a re-adjustable performance function is proposed to ensure the effectiveness and robustness of the proposed control system in the presence of external disturbance. Finally, extensive simulations and experiments are performed based on heterogeneous continuum robots, including the cable-driven continuum robot, the parallel continuum robot, the concentric-tube robot, the flexible endoscope, and the pneumatic continuum robot. Our results demonstrate that the task-space error of heterogeneous continuum robots complies with the prescribed boundaries and converges to steady state in predefined time, which reveals the efficacy of the proposed control method.
“Behavior-predefined adaptive control for heterogeneous continuum robots,” Ning Tan, Peng Yu, Xin Wang, Kai Huang. DOI: 10.1177/02783649241259138.
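The paper's "re-adjustable" performance function is its own contribution; the sketch below shows only the standard prescribed-performance ingredients such methods build on: an exponentially shrinking error funnel rho(t), and the error transformation that blows up as the error approaches the funnel boundary. All gains and names are illustrative assumptions.

```python
import math

def performance_funnel(t, rho0=1.0, rho_inf=0.05, decay=2.0):
    """Exponentially shrinking bound rho(t); the error must stay inside (-rho, rho)."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def transformed_error(e, rho):
    """Map a constrained error e in (-rho, rho) to an unconstrained variable that
    blows up as |e| approaches the funnel boundary."""
    z = e / rho
    return 0.5 * math.log((1 + z) / (1 - z))
```

Driving the transformed error to zero with an ordinary controller then enforces both the prescribed bound and the convergence rate encoded in rho(t).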
Pub Date: 2024-06-21. DOI: 10.1177/02783649241258782
Moses C. Nah, Johannes Lachner, Neville Hogan
Motor primitives are fundamental building blocks of a controller which enable dynamic robot behavior with minimal high-level intervention. By treating motor primitives as basic “modules,” different modules can be sequenced or superimposed to generate a rich repertoire of motor behavior. In robotics, two distinct approaches have been proposed: Dynamic Movement Primitives (DMPs) and Elementary Dynamic Actions (EDAs). While both approaches instantiate similar ideas, significant differences also exist. This paper attempts to clarify the distinction and provide a unifying view by delineating the similarities and differences between DMPs and EDAs. We provide nine robot control examples, including sequencing or superimposing movements, managing kinematic redundancy and singularity, control of both position and orientation of the robot’s end-effector, obstacle avoidance, and managing physical interaction. We show that the two approaches clearly diverge in their implementation. We also provide a real-robot demonstration to show how DMPs and EDAs can be combined to get the best of both approaches. With this detailed comparison, we enable researchers to make informed decisions to select the most suitable approach for specific robot tasks and applications.
“Robot control based on motor primitives: A comparison of two approaches,” Moses C. Nah, Johannes Lachner, Neville Hogan. DOI: 10.1177/02783649241258782.
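As a minimal sketch of the DMP side of the comparison, the transformation system below is a critically damped spring-damper drawn toward the goal; the learned forcing term that shapes a full DMP trajectory is omitted, and the gains are illustrative assumptions.

```python
def dmp_rollout(y0, goal, alpha=25.0, beta=6.25, dt=0.001, T=2.0):
    """Critically damped transformation system y'' = alpha*(beta*(goal - y) - y'),
    integrated with explicit Euler from rest; converges to `goal`."""
    y, dy = y0, 0.0
    for _ in range(int(T / dt)):
        ddy = alpha * (beta * (goal - y) - dy)
        dy += ddy * dt
        y += dy * dt
    return y
```

With alpha = 25 and beta = alpha/4, the system is critically damped, so the rollout approaches the goal without overshoot; superimposing a phase-driven forcing term on this baseline is what lets DMPs encode arbitrary demonstrated movements.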
Pub Date: 2024-06-14. DOI: 10.1177/02783649241256044
Yannik P. Wotte, Federico Califano, Stefano Stramigioli
This work presents a novel approach for the optimization of dynamic systems on finite-dimensional Lie groups. We rephrase dynamic systems as so-called neural ordinary differential equations (neural ODEs), and formulate the optimization problem on Lie groups. A gradient descent optimization algorithm is presented to tackle the optimization numerically. Our algorithm is scalable, and applicable to any finite-dimensional Lie group, including matrix Lie groups. By representing the system at the Lie algebra level, we reduce the computational cost of the gradient computation. In an extensive example, optimal potential energy shaping for control of a rigid body is treated. The optimal control problem is phrased as an optimization of a neural ODE on the Lie group SE(3), and the controller is iteratively optimized. The final controller is validated on a state-regulation task.
“Optimal potential shaping on SE(3) via neural ordinary differential equations on Lie groups,” Yannik P. Wotte, Federico Califano, Stefano Stramigioli. DOI: 10.1177/02783649241256044.
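Working "at the Lie algebra level" typically means parametrizing updates by a tangent vector and mapping back to the group with the exponential map. A sketch for so(3) via the Rodrigues formula (SE(3) adds a translation part not shown here); this is standard Lie-group machinery, not the paper's optimization algorithm.

```python
import math

def so3_exp(w):
    """Rodrigues formula: map a rotation vector w in R^3 (the Lie algebra so(3))
    to a 3x3 rotation matrix in SO(3)."""
    th = math.sqrt(sum(x * x for x in w))
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    if th < 1e-12:
        return I  # near the identity, exp(w) ~ I
    K = [[0.0, -w[2], w[1]], [w[2], 0.0, -w[0]], [-w[1], w[0], 0.0]]  # hat(w)
    KK = [[sum(K[i][k] * K[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    a, b = math.sin(th) / th, (1.0 - math.cos(th)) / th**2
    return [[I[i][j] + a * K[i][j] + b * KK[i][j] for j in range(3)] for i in range(3)]
```

Gradient steps taken in the three-dimensional tangent space and retracted through this map stay exactly on the rotation group, which is the computational advantage the abstract alludes to.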
Pub Date: 2024-06-10. DOI: 10.1177/02783649241257537
Cheol-Hui Min, Young Min Kim
Vision-based reinforcement learning (RL) is a generalizable way to control an agent because it is agnostic of specific hardware configurations. As visual observations are highly entangled, attempts at vision-based RL rely on scene representations that discern individual entities and establish intuitive physics to constitute a world model. However, most existing works on scene representation learning cannot be successfully deployed to train an RL agent, as they are often highly unstable and fail to persist over a long enough temporal horizon. We propose ASIMO, a fully unsupervised scene-decomposition method that enables a vision-based RL agent to perform interaction-rich tasks. ASIMO decomposes episode-length agent-object interaction videos into the agent, objects, and background, predicting their long-term interactions. Further, we explicitly model possible occlusion in the image observations and stably track individual objects. Then, we can correctly deduce the updated positions of individual entities in response to the agent action, only from partial visual observation. Based on the stable entity-wise decomposition and temporal prediction, we formulate a hierarchical framework to train the RL agent that focuses on the context around the object of interest.
“ASIMO: Agent-centric scene representation in multi-object manipulation,” Cheol-Hui Min, Young Min Kim. DOI: 10.1177/02783649241257537.
Pub Date: 2024-05-25. DOI: 10.1177/02783649241236860
Mathew Halm, Michael Posa
Robotic manipulation and locomotion often entail nearly-simultaneous collisions—such as heel and toe strikes during a foot step—with outcomes that are extremely sensitive to the order in which impacts occur. Robotic simulators and state estimation commonly lack the fidelity and accuracy to predict this ordering, and instead pick one with a heuristic. This discrepancy degrades performance when model-based controllers and policies learned in simulation are placed on a real robot. We reconcile this issue with a set-valued rigid-body model which generates a broad set of outcomes to simultaneous frictional impacts with any impact ordering. We first extend Routh’s impact model to multiple impacts by reformulating it as a differential inclusion (DI), and show that any solution will resolve all impacts in finite time. By considering time as a state, we embed this model into another DI which captures the continuous-time evolution of rigid-body dynamics, and guarantee existence of solutions. We finally cast simulation of simultaneous impacts as a linear complementarity problem (LCP), and develop an algorithm for tight approximation of the post-impact velocity set with probabilistic guarantees. We demonstrate our approach on several examples drawn from manipulation and legged locomotion, and compare the predictions to other models of rigid and compliant collisions.
“Set-valued rigid-body dynamics for simultaneous, inelastic, frictional impacts,” Mathew Halm, Michael Posa. DOI: 10.1177/02783649241236860.
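For a single inelastic contact, the impact LCP has a closed-form solution, which illustrates the complementarity structure the paper generalizes to sets of simultaneous frictional impacts. The sketch below is that one-contact special case, not the paper's set-valued model.

```python
def impact_impulse(m, v_minus):
    """Solve 0 <= lam  perp  v_plus = v_minus + lam/m >= 0 for one inelastic contact:
    either no impulse (separating contact) or an impulse that zeroes the velocity."""
    lam = max(0.0, -m * v_minus)  # impulse fires only for an approaching contact
    v_plus = v_minus + lam / m
    return lam, v_plus
```

With several contacts, the impulses couple through the mass matrix and the solution set need not be unique, which is exactly why a set-valued treatment and the LCP machinery in the paper become necessary.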
The generation of smoother and shorter spiral complete coverage paths in multi-connected domains is a crucial research topic in path planning for robotic cavity machining and other related fields. Traditional methods for spiral path planning in multi-connected domains typically incorporate a subregion division procedure that leads to excessive subregion bridging, requiring longer, more sharply turning, and unevenly spaced spirals to achieve complete coverage. To address this issue, this paper proposes a novel spiral complete coverage path planning method using conformal slit mapping. It takes advantage of the fact that conformal slit mapping can transform multi-connected domains into regular disks or annuluses without the need for subregion division. Firstly, a slit mapping calculation technique is proposed for segmented cubic spline boundaries with corners. Secondly, a spiral path spacing control method is developed based on the maximum inscribed circle radius between adjacent conformal slit mapping iso-parameters. Thirdly, the spiral coverage path is derived by offsetting iso-parameters. Numerical experiments indicate that our method's computation time is of the same order of magnitude as that of the traditional PDE-based spiral complete coverage method, while it excels in total path length, smoothness, and spacing consistency. Finally, we performed experiments on cavity milling and dry runs to compare the new method with the traditional PDE-based method in terms of machining duration and steering impact, respectively.
“Spiral complete coverage path planning based on conformal slit mapping in multi-connected domains,” Changqing Shen, Sihao Mao, Bingzhou Xu, Ziwei Wang, Xiaojian Zhang, Sijie Yan, Han Ding. The International Journal of Robotics Research, 2024-05-10. DOI: 10.1177/02783649241251385.
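On a plain disk (no slits), offsetting iso-parameters at a constant spacing reduces to an Archimedean spiral whose loops are exactly one pitch apart. The toy sketch below conveys that spacing-control idea; it assumes a circular domain and does not involve the conformal slit mapping itself.

```python
import math

def archimedean_spiral(pitch, turns, n=2000):
    """Sample r = (pitch / (2*pi)) * theta; successive loops are exactly `pitch` apart."""
    b = pitch / (2.0 * math.pi)
    pts = []
    for i in range(n + 1):
        theta = 2.0 * math.pi * turns * i / n
        r = b * theta
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

The paper's contribution is, in effect, carrying this even-spacing property from the disk back to an arbitrary multi-connected domain through the inverse conformal map.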
Pub Date: 2024-04-29. DOI: 10.1177/02783649241247285
Gustavo A. Cardona, Cristian-Ioan Vasile
This paper presents a comprehensive approach for planning for teams of heterogeneous robots with different capabilities and the transportation of resources. We use Capability Temporal Logic (CaTL), a formal language that helps express tasks involving robots with multiple capabilities with spatial, temporal, and logical constraints. We extend CaTL to also capture resource constraints, where resources can be divisible and indivisible, for instance, sand and bricks, respectively. Robots transport resources using various storage types, such as uniform (shared storage among resources) and compartmental (individual storage per resource). Robots’ resource transportation capacity is defined based on resource type and robot class. Robot and resource dynamics and the CaTL mission are jointly encoded in a Mixed Integer Linear Programming (MILP), which maximizes disjoint robot and resource robustness while minimizing spurious movement of both. We propose a multi-robustness approach for Multi-Class Signal Temporal Logic (mcSTL), allowing for generalized quantitative semantics across multiple predicate classes. Thus, we compute availability robustness scores for robots and resources separately. Finally, we conduct multiple experiments demonstrating functionality and time performance by varying resources and storage types.
“Planning for heterogeneous teams of robots with temporal logic, capability, and resource constraints,” Gustavo A. Cardona, Cristian-Ioan Vasile. DOI: 10.1177/02783649241247285.
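The quantitative semantics behind such robustness scores can be sketched for the two basic temporal operators over a finite trace: "always" takes the worst-case predicate margin, "eventually" the best. This is the standard STL construction; the paper's multi-class (mcSTL) generalization and the MILP encoding are not shown.

```python
def rob_always(signal, margin):
    """Robustness of 'always margin >= 0': worst-case margin over the trace."""
    return min(margin(x) for x in signal)

def rob_eventually(signal, margin):
    """Robustness of 'eventually margin >= 0': best margin achieved at some step."""
    return max(margin(x) for x in signal)
```

A positive score certifies satisfaction with that much slack, which is what the MILP maximizes separately for robot and resource availability.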
Pub Date: 2024-04-23. DOI: 10.1177/02783649241245729
Panagiotis Rousseas, Charalampos P. Bechlioulis, Kostas Kyriakopoulos
In this paper, a novel optimal motion planning framework is presented that enables navigating optimally from any initial to any final position within confined workspaces containing convex, moving obstacles. Our method outputs a smooth velocity vector field, which is then employed as a reference controller in order to sub-optimally avoid moving obstacles. The proposed approach leverages and extends desirable properties of reactive methods in order to provide a provably convergent and safe solution. Our algorithm is evaluated with both static and moving obstacles in synthetic environments and is compared against a variety of existing methods. The efficacy and applicability of the proposed scheme is finally validated in a high-fidelity simulation environment.
“Reactive optimal motion planning to anywhere in the presence of moving obstacles,” Panagiotis Rousseas, Charalampos P. Bechlioulis, Kostas Kyriakopoulos. DOI: 10.1177/02783649241245729.
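A toy velocity field with one attractive term and one repulsive term conveys the flavor of such reference fields, though the paper's construction carries optimality, convergence, and safety guarantees this sketch does not; all names and gains here are illustrative assumptions.

```python
import math

def field(p, goal, obs_center, obs_radius, k_rep=1.0):
    """Velocity command at point p: attraction toward the goal plus radial
    repulsion from one circular obstacle, growing as the clearance shrinks."""
    ax, ay = goal[0] - p[0], goal[1] - p[1]            # attractive term
    ox, oy = p[0] - obs_center[0], p[1] - obs_center[1]
    clearance = math.hypot(ox, oy) - obs_radius        # distance to obstacle surface
    gain = k_rep / max(clearance, 1e-6) ** 2
    return (ax + gain * ox, ay + gain * oy)
```

Naive potential fields of this kind can trap the agent in local minima; avoiding that while retaining smoothness and near-optimality is precisely what the proposed framework addresses.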
Pub Date: 2024-04-18 · DOI: 10.1177/02783649241229095
Kevin M. Judd, Jonathan D. Gammell
Visual motion estimation is a well-studied challenge in autonomous navigation. Recent work has focused on addressing multimotion estimation in highly dynamic environments. These environments not only comprise multiple, complex motions but also tend to exhibit significant occlusion. Estimating third-party motions simultaneously with the sensor egomotion is difficult because an object’s observed motion consists of both its true motion and the sensor motion. Most previous works in multimotion estimation simplify this problem by relying on appearance-based object detection or application-specific motion constraints. These approaches are effective in specific applications and environments but do not generalize well to the full multimotion estimation problem (MEP). This paper presents Multimotion Visual Odometry (MVO), a multimotion estimation pipeline that estimates the full SE(3) trajectory of every motion in the scene, including the sensor egomotion, without relying on appearance-based information. MVO extends the traditional visual odometry (VO) pipeline with multimotion segmentation and tracking techniques. It uses physically founded motion priors to extrapolate motions through temporary occlusions and identify the reappearance of motions through motion closure. Evaluations on real-world data from the Oxford Multimotion Dataset (OMD) and the KITTI Vision Benchmark Suite demonstrate that MVO achieves good estimation accuracy compared to similar approaches and is applicable to a variety of multimotion estimation challenges.
{"title":"Multimotion visual odometry","authors":"Kevin M. Judd, Jonathan D. Gammell","doi":"10.1177/02783649241229095","DOIUrl":"https://doi.org/10.1177/02783649241229095","url":null,"abstract":"Visual motion estimation is a well-studied challenge in autonomous navigation. Recent work has focused on addressing multimotion estimation in highly dynamic environments. These environments not only comprise multiple, complex motions but also tend to exhibit significant occlusion. Estimating third-party motions simultaneously with the sensor egomotion is difficult because an object’s observed motion consists of both its true motion and the sensor motion. Most previous works in multimotion estimation simplify this problem by relying on appearance-based object detection or application-specific motion constraints. These approaches are effective in specific applications and environments but do not generalize well to the full multimotion estimation problem (MEP). This paper presents Multimotion Visual Odometry (MVO), a multimotion estimation pipeline that estimates the full SE(3) trajectory of every motion in the scene, including the sensor egomotion, without relying on appearance-based information. MVO extends the traditional visual odometry (VO) pipeline with multimotion segmentation and tracking techniques. It uses physically founded motion priors to extrapolate motions through temporary occlusions and identify the reappearance of motions through motion closure. 
Evaluations on real-world data from the Oxford Multimotion Dataset (OMD) and the KITTI Vision Benchmark Suite demonstrate that MVO achieves good estimation accuracy compared to similar approaches and is applicable to a variety of multimotion estimation challenges.","PeriodicalId":501362,"journal":{"name":"The International Journal of Robotics Research","volume":"60 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140623537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
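The MVO abstract notes that an object's observed motion mixes its true motion with the sensor's egomotion, which is why the two must be estimated jointly. The entanglement itself is just SE(3) composition, sketched below; the frame conventions and helper names (`se3`, `true_object_motion`) are assumptions for illustration, not MVO's actual formulation.

```python
import numpy as np

def se3(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def true_object_motion(T_ego, T_apparent):
    """Recover an object's true motion from its apparent motion.

    The motion observed in a moving sensor's frame is the composition of
    the sensor egomotion with the object's own motion; composing the
    apparent motion with the estimated egomotion isolates the latter
    (under the frame conventions assumed here).
    """
    return T_ego @ T_apparent

# A stationary-looking object (identity apparent motion) seen from a
# sensor that translated 1 m along x must itself have moved with the sensor:
T_ego = se3(np.eye(3), np.array([1.0, 0.0, 0.0]))
T_obj = true_object_motion(T_ego, np.eye(4))
```

This is the degenerate two-motion case; MVO's segmentation and tracking machinery handles many simultaneous motions, occlusion, and motion closure, none of which this sketch attempts.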