
Latest publications from The International Journal of Robotics Research

AstroSLAM: Autonomous monocular navigation in the vicinity of a celestial small body—Theory and experiments
Pub Date : 2024-06-21 DOI: 10.1177/02783649241234367
Mehregan Dor, Travis Driver, Kenneth Getzandanner, Panagiotis Tsiotras
We propose AstroSLAM, a standalone vision-based solution for autonomous online navigation around an unknown celestial target small body. AstroSLAM is predicated on the formulation of the SLAM problem as an incrementally growing factor graph, facilitated by the use of the GTSAM library and the iSAM2 engine. By combining sensor fusion with orbital motion priors, we achieve improved performance over a baseline SLAM solution and outperform state-of-the-art methods predicated on pre-integrated inertial measurement unit factors. We incorporate orbital motion constraints into the factor graph by devising a novel relative dynamics—RelDyn—factor, which links the relative pose of the spacecraft to the problem of predicting trajectories stemming from the motion of the spacecraft in the vicinity of the small body. We demonstrate AstroSLAM’s performance and compare against the state-of-the-art methods using both real legacy mission imagery and trajectory data courtesy of NASA’s Planetary Data System, as well as real in-lab imagery data produced on a 3 degree-of-freedom spacecraft simulator test-bed.
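For readers unfamiliar with the incremental factor-graph machinery referred to above, the sketch below shows the bare GTSAM/iSAM2 update loop in Python. It is a generic skeleton only: the paper's RelDyn orbital-dynamics factor is not reproduced, so a plain BetweenFactorPose3 stands in as a placeholder relative-motion factor, and the keys, noise values, and the pose increment `delta` are invented for illustration.

```python
import numpy as np
import gtsam

# Incrementally growing factor graph solved with iSAM2 (GTSAM Python bindings).
# The relative-pose factor below is a placeholder; AstroSLAM would insert its
# RelDyn orbital-dynamics factor (and landmark factors) at the same point.
isam = gtsam.ISAM2()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.1))
motion_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.2))

# Initialize the graph with a prior on the first spacecraft pose.
graph = gtsam.NonlinearFactorGraph()
values = gtsam.Values()
x0 = gtsam.symbol('x', 0)
graph.add(gtsam.PriorFactorPose3(x0, gtsam.Pose3(), prior_noise))
values.insert(x0, gtsam.Pose3())
isam.update(graph, values)

for k in range(1, 5):
    graph = gtsam.NonlinearFactorGraph()      # only the *new* factors each step
    values = gtsam.Values()
    xk, xk_prev = gtsam.symbol('x', k), gtsam.symbol('x', k - 1)

    # Invented relative motion standing in for a dynamics/measurement factor.
    delta = gtsam.Pose3(gtsam.Rot3.Rz(0.05), np.array([0.1, 0.0, 0.0]))
    graph.add(gtsam.BetweenFactorPose3(xk_prev, xk, delta, motion_noise))

    # Initial guess: propagate the previous estimate by the increment.
    prev = isam.calculateEstimate().atPose3(xk_prev)
    values.insert(xk, prev.compose(delta))

    isam.update(graph, values)                # incremental smoothing step

print(isam.calculateEstimate().atPose3(gtsam.symbol('x', 4)))
```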
Citations: 0
Behavior-predefined adaptive control for heterogeneous continuum robots
Pub Date : 2024-06-21 DOI: 10.1177/02783649241259138
Ning Tan, Peng Yu, Xin Wang, Kai Huang
Continuum robots have great application value and broad prospects in various fields due to their dexterity and compliance. To fully exploit their advantages, it is crucial to develop an effective, accurate and robust control system for them. However, research on continuum robot control is still in its infancy and there are many problems remaining unsolved in this field. In particular, this paper focuses on the task-space behavior and the generic control of heterogeneous continuum robots. First, a controller is proposed to achieve the kinematic motion control and visual servoing of continuum robots with predefined task-space behavior. The predefined behavior is twofold: prescribed task-space error and predefined convergence time. Then, the proposed controller is integrated with a velocity-level kinematic mapping estimator to obtain a model-free control system, which is applicable to heterogeneous continuum robots. Furthermore, a re-adjustable performance function is proposed to ensure the effectiveness and robustness of the proposed control system in the presence of external disturbance. Finally, extensive simulations and experiments are performed based on heterogeneous continuum robots, including the cable-driven continuum robot, the parallel continuum robot, the concentric-tube robot, the flexible endoscope, and the pneumatic continuum robot. Our results demonstrate that the task-space error of heterogeneous continuum robots complies with the prescribed boundaries and converges to steady state in predefined time, which reveals the efficacy of the proposed control method.
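To make the two ingredients named above concrete, a task-space error kept inside a prescribed, shrinking performance bound, and a velocity-level estimate of the unknown kinematic mapping, the sketch below runs a toy two-joint kinematics through an error funnel while updating a Broyden-style Jacobian estimate online. The funnel shape, gains, probing-based initialization, and the toy forward map are all assumptions made for the example; this is not the controller proposed in the paper.

```python
import numpy as np

def funnel(t, rho0=2.0, rho_inf=0.01, T=3.0):
    """Prescribed performance bound shrinking from rho0 to rho_inf by time T."""
    return rho_inf + (rho0 - rho_inf) * max(0.0, 1.0 - t / T) ** 2

def forward_map(q):
    """Kinematics unknown to the controller (toy 2-link planar arm, unit links)."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def numeric_jacobian(f, q, h=1e-4):
    """Initial mapping estimate from a small probing motion (an assumption here)."""
    J = np.zeros((2, 2))
    for i in range(2):
        d = np.zeros(2); d[i] = h
        J[:, i] = (f(q + d) - f(q - d)) / (2 * h)
    return J

q = np.array([0.3, 0.4])
x_des = np.array([0.5, 1.2])
J_hat = numeric_jacobian(forward_map, q)
dt = 0.01

for k in range(600):
    t = k * dt
    e = forward_map(q) - x_des
    rho = funnel(t)
    xi = np.clip(e / rho, -0.999, 0.999)        # normalized error inside the funnel
    eps = np.arctanh(xi)                        # transformed, unconstrained error
    dq = -np.linalg.pinv(J_hat) @ (2.0 * eps * rho)
    q_new = q + dq * dt
    dx = forward_map(q_new) - forward_map(q)
    step = dq * dt
    # Broyden update: the velocity-level kinematic-mapping estimator.
    J_hat += np.outer(dx - J_hat @ step, step) / (step @ step + 1e-9)
    q = q_new

print("final task-space error:", forward_map(q) - x_des)
```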
Citations: 0
Robot control based on motor primitives: A comparison of two approaches
Pub Date : 2024-06-21 DOI: 10.1177/02783649241258782
Moses C. Nah, Johannes Lachner, Neville Hogan
Motor primitives are fundamental building blocks of a controller which enable dynamic robot behavior with minimal high-level intervention. By treating motor primitives as basic “modules,” different modules can be sequenced or superimposed to generate a rich repertoire of motor behavior. In robotics, two distinct approaches have been proposed: Dynamic Movement Primitives (DMPs) and Elementary Dynamic Actions (EDAs). While both approaches instantiate similar ideas, significant differences also exist. This paper attempts to clarify the distinction and provide a unifying view by delineating the similarities and differences between DMPs and EDAs. We provide nine robot control examples, including sequencing or superimposing movements, managing kinematic redundancy and singularity, control of both position and orientation of the robot’s end-effector, obstacle avoidance, and managing physical interaction. We show that the two approaches clearly diverge in their implementation. We also provide a real-robot demonstration to show how DMPs and EDAs can be combined to get the best of both approaches. With this detailed comparison, we enable researchers to make informed decisions to select the most suitable approach for specific robot tasks and applications.
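Since the comparison hinges on what a Dynamic Movement Primitive actually is, here is a minimal one-dimensional discrete DMP rollout: a stable second-order system toward the goal, shaped by a phase-driven forcing term. The gains, the five basis functions, and the weight vector `w` are arbitrary illustrative values (in practice `w` is fit to a demonstration), and nothing below reproduces the paper's EDA formulation or its nine examples.

```python
import numpy as np

# One-DOF discrete Dynamic Movement Primitive:
#   tau^2 * xdd = alpha_z * (beta_z * (g - x) - tau * xd) + f(s)
#   tau   * sd  = -alpha_s * s                      (canonical system)
alpha_z, beta_z, alpha_s, tau = 25.0, 25.0 / 4.0, 4.0, 1.0
x0, g = 0.0, 1.0

centers = np.exp(-alpha_s * np.linspace(0.0, 1.0, 5))       # basis centres along the phase
widths = 1.0 / (np.diff(centers, append=centers[-1] / 2) ** 2 + 1e-6)
w = np.array([50.0, -30.0, 20.0, 0.0, 0.0])                 # arbitrary shape weights

def forcing(s):
    psi = np.exp(-widths * (s - centers) ** 2)
    return (psi @ w) / (psi.sum() + 1e-10) * s * (g - x0)    # vanishes as s -> 0

dt, x, xd, s = 0.001, x0, 0.0, 1.0
for _ in range(int(2.0 * tau / dt)):
    xdd = (alpha_z * (beta_z * (g - x) - tau * xd) + forcing(s)) / tau**2
    xd += xdd * dt
    x += xd * dt
    s += (-alpha_s * s / tau) * dt

print("final position (approaches the goal g = 1):", x)
```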
Citations: 0
Optimal potential shaping on SE(3) via neural ordinary differential equations on Lie groups
Pub Date : 2024-06-14 DOI: 10.1177/02783649241256044
Yannik P. Wotte, Federico Califano, Stefano Stramigioli
This work presents a novel approach for the optimization of dynamic systems on finite-dimensional Lie groups. We rephrase dynamic systems as so-called neural ordinary differential equations (neural ODEs), and formulate the optimization problem on Lie groups. A gradient descent optimization algorithm is presented to tackle the optimization numerically. Our algorithm is scalable, and applicable to any finite-dimensional Lie group, including matrix Lie groups. By representing the system at the Lie algebra level, we reduce the computational cost of the gradient computation. In an extensive example, optimal potential energy shaping for control of a rigid body is treated. The optimal control problem is phrased as an optimization of a neural ODE on the Lie group SE(3), and the controller is iteratively optimized. The final controller is validated on a state-regulation task.
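The core numerical idea, descending on the group by integrating through its Lie algebra and the exponential map, can be shown without any neural network. The sketch below runs Lie-Euler steps on SO(3) under a simple attitude-error potential; the SE(3) case, the neural ODE parameterization, and the scalable gradient computation of the paper are omitted, and the gain, time step, and initial attitude are made up for the example.

```python
import numpy as np

def hat(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def vee(S):
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def expm_so3(w):
    """Rodrigues formula: exponential map from so(3) to SO(3)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

R = expm_so3(np.array([0.0, 1.2, 0.0]))    # initial attitude, roughly 69 deg off the target
R_des = np.eye(3)
kp, dt = 4.0, 0.01

for _ in range(500):
    # Gradient of the potential 0.5*kp*tr(I - R_des^T R) w.r.t. the body angular velocity.
    grad = 0.5 * kp * vee(R_des.T @ R - R.T @ R_des)
    omega = -grad                          # first-order (kinematic) gradient descent
    R = R @ expm_so3(omega * dt)           # Lie-Euler step: stays exactly on SO(3)

angle = np.degrees(np.arccos(np.clip((np.trace(R_des.T @ R) - 1.0) / 2.0, -1.0, 1.0)))
print(f"remaining attitude error: {angle:.4f} deg")
```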
Citations: 0
ASIMO: Agent-centric scene representation in multi-object manipulation
Pub Date : 2024-06-10 DOI: 10.1177/02783649241257537
Cheol-Hui Min, Young Min Kim
Vision-based reinforcement learning (RL) is a generalizable way to control an agent because it is agnostic of specific hardware configurations. As visual observations are highly entangled, attempts for vision-based RL rely on scene representation that discerns individual entities and establishes intuitive physics to constitute the world model. However, most existing works on scene representation learning cannot successfully be deployed to train an RL agent, as they are often highly unstable and fail to sustain for a long enough temporal horizon. We propose ASIMO, a fully unsupervised scene decomposition to perform interaction-rich tasks with a vision-based RL agent. ASIMO decomposes agent-object interaction videos of episodic-length into the agent, objects, and background, predicting their long-term interactions. Further, we explicitly model possible occlusion in the image observations and stably track individual objects. Then, we can correctly deduce the updated positions of individual entities in response to the agent action, only from partial visual observation. Based on the stable entity-wise decomposition and temporal prediction, we formulate a hierarchical framework to train the RL agent that focuses on the context around the object of interest. We demonstrate that our formulation for scene representation can be universally deployed to train different configurations of agents and accomplish several tasks that involve pushing, arranging, and placing multiple rigid objects.
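To make "tracking entities through occlusion from partial observation" concrete, here is a deliberately tiny constant-velocity tracker that coasts whenever the observation is missing. It is only a hand-written stand-in for intuition: ASIMO does this with an unsupervised, learned scene decomposition, none of which appears below, and the observation sequence and blending factor are invented.

```python
import numpy as np

class CoastingTrack:
    """Constant-velocity track that keeps predicting while its object is occluded."""

    def __init__(self, p0):
        self.p = np.asarray(p0, dtype=float)
        self.v = np.zeros_like(self.p)

    def step(self, obs, alpha=0.5):
        pred = self.p + self.v                    # constant-velocity prediction
        if obs is None:                           # occluded frame: trust the prediction
            self.p = pred
        else:                                     # visible frame: blend in the measurement
            obs = np.asarray(obs, dtype=float)
            self.v = (1.0 - alpha) * self.v + alpha * (obs - self.p)
            self.p = (1.0 - alpha) * pred + alpha * obs
        return self.p

track = CoastingTrack([0.0, 0.0])
observations = [[1, 0], [2, 0], [3, 0], None, None, None, [7, 0]]   # three occluded frames
for z in observations:
    print(track.step(z))
```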
Citations: 0
Set-valued rigid-body dynamics for simultaneous, inelastic, frictional impacts
Pub Date : 2024-05-25 DOI: 10.1177/02783649241236860
Mathew Halm, Michael Posa
Robotic manipulation and locomotion often entail nearly-simultaneous collisions—such as heel and toe strikes during a foot step—with outcomes that are extremely sensitive to the order in which impacts occur. Robotic simulators and state estimation commonly lack the fidelity and accuracy to predict this ordering, and instead pick one with a heuristic. This discrepancy degrades performance when model-based controllers and policies learned in simulation are placed on a real robot. We reconcile this issue with a set-valued rigid-body model which generates a broad set of outcomes to simultaneous frictional impacts with any impact ordering. We first extend Routh’s impact model to multiple impacts by reformulating it as a differential inclusion (DI), and show that any solution will resolve all impacts in finite time. By considering time as a state, we embed this model into another DI which captures the continuous-time evolution of rigid-body dynamics, and guarantee existence of solutions. We finally cast simulation of simultaneous impacts as a linear complementarity problem (LCP), and develop an algorithm for tight approximation of the post-impact velocity set with probabilistic guarantees. We demonstrate our approach on several examples drawn from manipulation and legged locomotion, and compare the predictions to other models of rigid and compliant collisions.
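The last step mentioned above, casting simultaneous impacts as a linear complementarity problem, can be illustrated with a toy: a unit point mass flying into a corner and striking both walls at once, solved by projected Gauss-Seidel. This is a generic perfectly inelastic, frictionless impact LCP with invented numbers, not the paper's set-valued Routh model or its probabilistic approximation algorithm.

```python
import numpy as np

M = np.eye(2)                            # unit point mass
v_minus = np.array([-1.0, -2.0])         # pre-impact velocity, into both walls
J = np.array([[1.0, 0.0],                # contact normals of the walls x >= 0 and y >= 0
              [0.0, 1.0]])

A = J @ np.linalg.inv(M) @ J.T           # Delassus operator
b = J @ v_minus                          # pre-impact normal velocities

def pgs_lcp(A, b, iters=200):
    """Projected Gauss-Seidel for: w = A*lam + b >= 0, lam >= 0, lam^T w = 0."""
    lam = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            r = b[i] + A[i] @ lam - A[i, i] * lam[i]
            lam[i] = max(0.0, -r / A[i, i])
    return lam

lam = pgs_lcp(A, b)
v_plus = v_minus + np.linalg.inv(M) @ J.T @ lam
print("impact impulses:", lam)           # [1, 2]
print("post-impact velocity:", v_plus)   # [0, 0]: the mass sticks in the corner
```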
Citations: 0
Spiral complete coverage path planning based on conformal slit mapping in multi-connected domains
Pub Date : 2024-05-10 DOI: 10.1177/02783649241251385
Changqing Shen, Sihao Mao, Bingzhou Xu, Ziwei Wang, Xiaojian Zhang, Sijie Yan, Han Ding
The generation of smoother and shorter spiral complete coverage paths in multi-connected domains is a crucial research topic in path planning for robotic cavity machining and other related fields. Traditional methods for spiral path planning in multi-connected domains typically incorporate a subregion division procedure that leads to excessive subregion bridging, requiring longer, more sharply turning, and unevenly spaced spirals to achieve complete coverage. To address this issue, this paper proposes a novel spiral complete coverage path planning method using conformal slit mapping. It takes advantage of the fact that conformal slit mapping can transform multi-connected domains into regular disks or annuluses without the need for subregion division. Firstly, a slit mapping calculation technique is proposed for segmented cubic spline boundaries with corners. Secondly, a spiral path spacing control method is developed based on the maximum inscribed circle radius between adjacent conformal slit mapping iso-parameters. Thirdly, the spiral coverage path is derived by offsetting iso-parameters. Numerical experiments indicate that our method shares a comparable order-of-magnitude in computation time with the traditional PDE-based spiral complete coverage path method, but it excels in optimizing total path length, smoothness, and spacing consistency. Finally, we performed experiments on cavity milling and dry runs to compare the new method with the traditional PDE-based method in terms of machining duration and steering impact, respectively. The comparison reveals that, with both algorithms achieving complete coverage, the new algorithm reduces machining time and steering impact by 12.34% and 22.78%, respectively, compared with the traditional PDE-based method.
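After the conformal slit mapping, the planning domain is a regular disk or annulus, on which a spiral with a controlled pass spacing is straightforward to generate. The snippet below builds such a constant-spacing spiral on an annulus purely for intuition: the slit mapping itself, the corner-aware spline boundaries, and the inscribed-circle spacing rule of the paper are not reproduced, and all dimensions are arbitrary. In the paper's setting, a path like this lives in the canonical domain and is carried back to the physical multi-connected workspace through the conformal map.

```python
import numpy as np

def annulus_spiral(r_in=0.2, r_out=1.0, spacing=0.1, pts_per_turn=200):
    """Archimedean spiral whose radius grows by exactly `spacing` every turn."""
    n_turns = (r_out - r_in) / spacing
    theta = np.linspace(0.0, 2.0 * np.pi * n_turns, int(pts_per_turn * n_turns))
    r = r_in + spacing * theta / (2.0 * np.pi)       # constant radial offset per turn
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

path = annulus_spiral()
print(len(path), "waypoints, from radius",
      round(float(np.linalg.norm(path[0])), 3), "to",
      round(float(np.linalg.norm(path[-1])), 3))
```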
Citations: 0
Planning for heterogeneous teams of robots with temporal logic, capability, and resource constraints
Pub Date : 2024-04-29 DOI: 10.1177/02783649241247285
Gustavo A. Cardona, Cristian-Ioan Vasile
This paper presents a comprehensive approach for planning for teams of heterogeneous robots with different capabilities and the transportation of resources. We use Capability Temporal Logic (CaTL), a formal language that helps express tasks involving robots with multiple capabilities with spatial, temporal, and logical constraints. We extend CaTL to also capture resource constraints, where resources can be divisible and indivisible, for instance, sand and bricks, respectively. Robots transport resources using various storage types, such as uniform (shared storage among resources) and compartmental (individual storage per resource). Robots’ resource transportation capacity is defined based on resource type and robot class. Robot and resource dynamics and the CaTL mission are jointly encoded in a Mixed Integer Linear Programming (MILP), which maximizes disjoint robot and resource robustness while minimizing spurious movement of both. We propose a multi-robustness approach for Multi-Class Signal Temporal Logic (mcSTL), allowing for generalized quantitative semantics across multiple predicate classes. Thus, we compute availability robustness scores for robots and resources separately. Finally, we conduct multiple experiments demonstrating functionality and time performance by varying resources and storage types.
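The flavor of the MILP encoding, integer counts of robots per class chosen so that capability requirements are met at minimum cost, can be sketched with SciPy's mixed-integer interface. The classes, capabilities, demands, and costs below are invented, and the temporal-logic, resource-transport, and robustness terms that make up the actual CaTL encoding are omitted.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Columns: robot classes (ground, aerial, manipulator-equipped), invented for illustration.
# Rows:    capabilities  (sense, grasp).
capability = np.array([[1, 1, 0],        # which classes provide "sense"
                       [0, 0, 1]])       # which classes provide "grasp"
required = np.array([3, 1])              # the task needs >= 3 sensing robots and >= 1 grasper
cost = np.array([1.0, 2.5, 4.0])         # per-robot deployment cost of each class

res = milp(
    c=cost,                                                  # minimize total cost
    constraints=LinearConstraint(capability, lb=required, ub=np.inf),
    integrality=np.ones(3),                                  # integer robot counts
    bounds=Bounds(0, 10),                                    # at most 10 robots per class
)
print("robots per class:", res.x, "total cost:", res.fun)    # expect [3, 0, 1], cost 7.0
```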
Citations: 0
Reactive optimal motion planning to anywhere in the presence of moving obstacles
Pub Date : 2024-04-23 DOI: 10.1177/02783649241245729
Panagiotis Rousseas, Charalampos P. Bechlioulis, Kostas Kyriakopoulos
In this paper, a novel optimal motion planning framework that enables navigating optimally from any initial, to any final position within confined workspaces with convex, moving obstacles is presented. Our method outputs a smooth velocity vector field, which is then employed as a reference controller in order to sub-optimally avoid moving obstacles. The proposed approach leverages and extends desirable properties of reactive methods in order to provide a provably convergent and safe solution. Our algorithm is evaluated with both static and moving obstacles in synthetic environments and is compared against a variety of existing methods. The efficacy and applicability of the proposed scheme is finally validated in a high-fidelity simulation environment.
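As a point of reference for what a reactive velocity vector field looks like, the sketch below combines an attractive term toward the goal with a repulsive term around a moving circular obstacle and integrates the resulting kinematics. It is the classical potential-field construction with made-up gains, not the paper's optimal, provably convergent field, and it inherits the usual local-minimum caveats of that construction.

```python
import numpy as np

def velocity(p, goal, obs_center, obs_radius=0.4, k_att=1.0, k_rep=0.3, influence=0.5):
    """Reference velocity: attraction to the goal plus repulsion near the obstacle."""
    v = k_att * (goal - p)
    gap = np.linalg.norm(p - obs_center) - obs_radius
    if gap < influence:
        n = (p - obs_center) / (np.linalg.norm(p - obs_center) + 1e-9)
        v += k_rep * (1.0 / max(gap, 1e-3) - 1.0 / influence) * n
    return v

p = np.array([-2.0, 0.0])
goal = np.array([2.0, 0.0])
dt = 0.02

for k in range(1000):
    obstacle = np.array([0.0, -1.0 + 0.5 * k * dt])   # obstacle drifting across the path
    p = p + velocity(p, goal, obstacle) * dt           # kinematic integration of the field
    if np.linalg.norm(p - goal) < 1e-2:
        break

print("reached", np.round(p, 3), "after", round((k + 1) * dt, 2), "s")
```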
Citations: 0
Multimotion visual odometry
Pub Date : 2024-04-18 DOI: 10.1177/02783649241229095
Kevin M. Judd, Jonathan D. Gammell
Visual motion estimation is a well-studied challenge in autonomous navigation. Recent work has focused on addressing multimotion estimation in highly dynamic environments. These environments not only comprise multiple, complex motions but also tend to exhibit significant occlusion. Estimating third-party motions simultaneously with the sensor egomotion is difficult because an object’s observed motion consists of both its true motion and the sensor motion. Most previous works in multimotion estimation simplify this problem by relying on appearance-based object detection or application-specific motion constraints. These approaches are effective in specific applications and environments but do not generalize well to the full multimotion estimation problem (MEP). This paper presents Multimotion Visual Odometry (MVO), a multimotion estimation pipeline that estimates the full SE(3) trajectory of every motion in the scene, including the sensor egomotion, without relying on appearance-based information. MVO extends the traditional visual odometry (VO) pipeline with multimotion segmentation and tracking techniques. It uses physically founded motion priors to extrapolate motions through temporary occlusions and identify the reappearance of motions through motion closure. Evaluations on real-world data from the Oxford Multimotion Dataset (OMD) and the KITTI Vision Benchmark Suite demonstrate that MVO achieves good estimation accuracy compared to similar approaches and is applicable to a variety of multimotion estimation challenges.
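One building block behind estimating "the full SE(3) trajectory of every motion" is recovering a single rigid transform from tracked 3D point correspondences, which the Kabsch/Umeyama SVD solution gives in closed form. The sketch below demonstrates that step on synthetic points with invented noise; the egomotion/third-party segmentation, occlusion handling, and motion priors that constitute MVO itself are not shown.

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """R, t minimizing sum ||R p_i + t - q_i||^2 over correspondences p_i -> q_i."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # reflection guard
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(0)
P = rng.standard_normal((50, 3))                       # points on one tracked object, frame k
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.1])
Q = (R_true @ P.T).T + t_true + 0.01 * rng.standard_normal(P.shape)   # frame k+1, noisy

R_est, t_est = estimate_rigid_transform(P, Q)
rot_err = np.degrees(np.arccos(np.clip((np.trace(R_true.T @ R_est) - 1.0) / 2.0, -1.0, 1.0)))
print(f"rotation error: {rot_err:.3f} deg, translation error: {np.linalg.norm(t_est - t_true):.4f}")
```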
Citations: 0