
Latest publications from the International Journal of Robotics Research

Locally active globally stable dynamical systems: Theory, learning, and experiments
IF 9.2, Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2022-01-27. DOI: 10.1177/02783649211030952
Nadia Figueroa, A. Billard
State-dependent dynamical systems (DSs) offer adaptivity, reactivity, and robustness to perturbations in motion planning and physical human–robot interaction tasks. Learning DS-based motion plans from non-linear reference trajectories is an active research area in robotics. Most approaches focus on learning DSs that can (i) accurately mimic the demonstrated motion, while (ii) ensuring convergence to the target, i.e., they are globally asymptotically (or exponentially) stable. When subject to perturbations, a compliant robot guided with a DS will continue following the next integral curves of the DS towards the target. If the task requires the robot to track a specific reference trajectory, this approach will fail. To alleviate this shortcoming, we propose the locally active globally stable DS (LAGS-DS), a novel DS formulation that provides both global convergence and stiffness-like symmetric attraction behaviors around a reference trajectory in regions of the state space where trajectory tracking is important. This allows for a unified approach towards motion and impedance encoding in a single DS-based motion model, i.e., stiffness is embedded in the DS. To learn LAGS-DS from demonstrations we propose a learning strategy based on Bayesian non-parametric Gaussian mixture models, Gaussian processes, and a sequence of constrained optimization problems that ensure estimation of stable DS parameters via Lyapunov theory. We experimentally validated LAGS-DS on writing tasks with a KUKA LWR 4+ arm and on navigation and co-manipulation tasks with iCub humanoid robots.
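To make the stability notions above concrete, a state-dependent DS $\dot{x} = f(x)$ with attractor $x^*$ is globally asymptotically stable if a Lyapunov function $V$ certifies convergence from everywhere, and the "locally active" behavior can be pictured as a state-dependent blend of a trajectory-tracking component and a globally stable one. The following is only a schematic sketch of that structure, not the published LAGS-DS parameterization:

$V(x) > 0 \;\; \forall x \neq x^*, \qquad V(x^*) = 0, \qquad \dot{V}(x) = \nabla V(x)^\top f(x) < 0 \;\; \forall x \neq x^*$

$f(x) = \alpha(x)\, f_{\mathrm{track}}(x) + \bigl(1 - \alpha(x)\bigr)\, f_{\mathrm{global}}(x), \qquad \alpha(x) \in [0, 1]$

Here $\alpha(x)$ is close to 1 only in the state-space regions around the reference trajectory where stiff, tracking-like attraction is wanted, while the globally stable component dominates elsewhere; learning then amounts to fitting these components from demonstrations subject to the Lyapunov constraint.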
Citations: 10
Inducing structure in reward learning by learning features
IF 9.2, Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2022-01-18. DOI: 10.1177/02783649221078031
Andreea Bobu, Marius Wiggert, C. Tomlin, A. Dragan
Reward learning enables robots to learn adaptable behaviors from human input. Traditional methods model the reward as a linear function of hand-crafted features, but that requires specifying all the relevant features a priori, which is impossible for real-world tasks. To get around this issue, recent deep Inverse Reinforcement Learning (IRL) methods learn rewards directly from the raw state but this is challenging because the robot has to implicitly learn the features that are important and how to combine them, simultaneously. Instead, we propose a divide-and-conquer approach: focus human input specifically on learning the features separately, and only then learn how to combine them into a reward. We introduce a novel type of human input for teaching features and an algorithm that utilizes it to learn complex features from the raw state space. The robot can then learn how to combine them into a reward using demonstrations, corrections, or other reward learning frameworks. We demonstrate our method in settings where all features have to be learned from scratch, as well as where some of the features are known. By first focusing human input specifically on the feature(s), our method decreases sample complexity and improves generalization of the learned reward over a deep IRL baseline. We show this in experiments with a physical 7-DoF robot manipulator, and in a user study conducted in a simulated environment.
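A minimal Python sketch of the divide-and-conquer idea described above, using hypothetical stand-in functions (make_feature and fit_reward_weights are illustrative names, not the authors' code): each feature is learned separately from targeted human input, and only the linear weights that combine the learned features into a reward are fitted afterwards.

# Sketch only: each feature is learned on its own from human-provided labels,
# then a reward r(s) = w . phi(s) is fitted over the frozen features.
import numpy as np

def make_feature(training_states, training_labels):
    """Placeholder for a learned feature: here, a nearest-neighbor regressor."""
    training_states = np.asarray(training_states, dtype=float)
    training_labels = np.asarray(training_labels, dtype=float)
    def phi(state):
        d = np.linalg.norm(training_states - np.asarray(state, dtype=float), axis=1)
        return training_labels[np.argmin(d)]
    return phi

def fit_reward_weights(features, demo_states, demo_returns):
    """Least-squares fit of w in r(s) = w . phi(s), given per-state targets."""
    Phi = np.array([[f(s) for f in features] for s in demo_states])
    w, *_ = np.linalg.lstsq(Phi, np.asarray(demo_returns, dtype=float), rcond=None)
    return w

# Toy usage: two hand-taught features on 2D states, then a reward over them.
dist_to_goal = make_feature([[0, 0], [1, 1], [2, 2]], [0.0, 1.4, 2.8])
table_height = make_feature([[0, 0], [1, 1], [2, 2]], [0.0, 1.0, 2.0])
features = [dist_to_goal, table_height]
w = fit_reward_weights(features, [[0, 0], [1, 1], [2, 2]], [0.0, -1.0, -2.0])
reward = lambda s: float(np.dot(w, [f(s) for f in features]))
print(reward([1, 1]))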
Citations: 16
Self-supervised learning for using overhead imagery as maps in outdoor range sensor localization.
IF 9.2, Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2021-12-01. Epub Date: 2021-09-28. DOI: 10.1177/02783649211045736
Tim Y Tang, Daniele De Martini, Shangzhe Wu, Paul Newman

Traditional approaches to outdoor vehicle localization assume a reliable, prior map is available, typically built using the same sensor suite as the on-board sensors used during localization. This work makes a different assumption. It assumes that an overhead image of the workspace is available and utilizes that as a map for range-based sensor localization by a vehicle. Here, range-based sensors are radars and lidars. Our motivation is simple: off-the-shelf, publicly available overhead imagery such as Google satellite images can be a ubiquitous, cheap, and powerful tool for vehicle localization when a usable prior sensor map is unavailable, inconvenient, or expensive. The challenge to be addressed is that overhead images are clearly not directly comparable to data from ground range sensors because of their starkly different modalities. We present a learned metric localization method that not only handles the modality difference, but is also cheap to train, learning in a self-supervised fashion without requiring metrically accurate ground truth. By evaluating across multiple real-world datasets, we demonstrate the robustness and versatility of our method for various sensor configurations in cross-modality localization, achieving localization errors on-par with a prior supervised approach while requiring no pixel-wise aligned ground truth for supervision at training. We pay particular attention to the use of millimeter-wave radar, which, owing to its complex interaction with the scene and its immunity to weather and lighting conditions, makes for a compelling and valuable use case.
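A toy Python sketch of the cross-modality matching step implied by the abstract, under the assumption that some learned encoder maps both live range data and overhead-map patches into a common embedding space (embed, crop_at, and localize are hypothetical names, and the encoder is replaced by a trivial histogram; this is not the paper's implementation):

# Sketch only: score candidate poses by how well the map patch at each pose
# matches the embedding of the live "scan", then keep the best one.
import numpy as np

def embed(image_patch):
    """Stand-in for a learned encoder: here, a normalized intensity histogram."""
    hist, _ = np.histogram(image_patch, bins=16, range=(0.0, 1.0), density=True)
    return hist / (np.linalg.norm(hist) + 1e-9)

def crop_at(overhead_image, cx, cy, size=32):
    return overhead_image[cy:cy + size, cx:cx + size]

def localize(scan_image, overhead_image, candidate_poses):
    """Return the candidate (x, y) whose map patch embedding matches the scan best."""
    scan_vec = embed(scan_image)
    scores = [float(np.dot(scan_vec, embed(crop_at(overhead_image, x, y))))
              for x, y in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]

# Toy usage on random data.
rng = np.random.default_rng(0)
overhead = rng.random((256, 256))
scan = crop_at(overhead, 100, 60)            # pretend the sensor "sees" this patch
candidates = [(20, 20), (100, 60), (200, 180)]
print(localize(scan, overhead, candidates))  # -> (100, 60)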

Citations: 0
Learning to solve sequential physical reasoning problems from a scene image
IF 9.2, Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2021-12-01. DOI: 10.1177/02783649211056967
Danny Driess, Jung-Su Ha, Marc Toussaint
In this article, we propose deep visual reasoning, which is a convolutional recurrent neural network that predicts discrete action sequences from an initial scene image for sequential manipulation problems that arise, for example, in task and motion planning (TAMP). Typical TAMP problems are formalized by combining reasoning on a symbolic, discrete level (e.g., first-order logic) with continuous motion planning such as nonlinear trajectory optimization. The action sequences represent the discrete decisions on a symbolic level, which, in turn, parameterize a nonlinear trajectory optimization problem. Owing to the great combinatorial complexity of possible discrete action sequences, a large number of optimization/motion planning problems have to be solved to find a solution, which limits the scalability of these approaches. To circumvent this combinatorial complexity, we introduce deep visual reasoning: based on a segmented initial image of the scene, a neural network directly predicts promising discrete action sequences such that ideally only one motion planning problem has to be solved to find a solution to the overall TAMP problem. Our method generalizes to scenes with many and varying numbers of objects, although being trained on only two objects at a time. This is possible by encoding the objects of the scene and the goal in (segmented) images as input to the neural network, instead of a fixed feature vector. We show that the framework can not only handle kinematic problems such as pick-and-place (as typical in TAMP), but also tool-use scenarios for planar pushing under quasi-static dynamic models. Here, the image-based representation enables generalization to other shapes than during training. Results show runtime improvements of several orders of magnitudes by, in many cases, removing the need to search over the discrete action sequences.
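The control flow suggested by the abstract can be sketched as follows in Python, with placeholder functions standing in for the learned visual predictor and the nonlinear trajectory optimizer (rank_action_sequences and try_trajectory_optimization are assumptions for illustration, not the authors' interfaces): candidate discrete action sequences are tried in the order the predictor ranks them, so that ideally the first trajectory optimization already succeeds.

# Sketch only: a ranked shortlist of action sequences replaces exhaustive
# enumeration over all symbolic decisions.
from itertools import product

def rank_action_sequences(scene_image, actions, horizon, top_k=3):
    """Stand-in for the visual predictor: here, an arbitrary fixed ranking."""
    all_sequences = list(product(actions, repeat=horizon))
    return all_sequences[:top_k]

def try_trajectory_optimization(scene_image, sequence):
    """Stand-in for the nonlinear solver: succeeds on a hard-coded sequence."""
    return ("pick", "place") == sequence

def solve_tamp(scene_image, actions=("pick", "push", "place"), horizon=2):
    for sequence in rank_action_sequences(scene_image, actions, horizon):
        if try_trajectory_optimization(scene_image, sequence):
            return sequence          # ideally one of the first candidates works
    return None

print(solve_tamp(scene_image=None))  # -> ('pick', 'place')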
Citations: 16
Robotics: Science and Systems (RSS) 2020
IF 9.2, Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2021-12-01. DOI: 10.1177/02783649211052346
T. Nanayakkara, T. Barfoot, T. Howard
{"title":"Robotics: Science and Systems (RSS) 2020","authors":"T. Nanayakkara, T. Barfoot, T. Howard","doi":"10.1177/02783649211052346","DOIUrl":"https://doi.org/10.1177/02783649211052346","url":null,"abstract":"","PeriodicalId":54942,"journal":{"name":"International Journal of Robotics Research","volume":"40 1","pages":"1329 - 1330"},"PeriodicalIF":9.2,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41568869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Enabling impedance-based physical human–multi–robot collaboration: Experiments with four torque-controlled manipulators
IF 9.2, Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2021-11-24. DOI: 10.1177/02783649211053650
Niels Dehio, Joshua Smith, D. L. Wigand, Pouya Mohammadi, M. Mistry, Jochen J. Steil
Robotics research into multi-robot systems so far has concentrated on implementing intelligent swarm behavior and contact-less human interaction. Studies of haptic or physical human-robot interaction, by contrast, have primarily focused on the assistance offered by a single robot. Consequently, our understanding of the physical interaction and the implicit communication through contact forces between a human and a team of multiple collaborative robots is limited. We here introduce the term Physical Human Multi-Robot Collaboration (PHMRC) to describe this more complex situation, which we consider highly relevant in future service robotics. The scenario discussed in this article covers multiple manipulators in close proximity and coupled through physical contacts. We represent this set of robots as fingers of an up-scaled agile robot hand. This perspective enables us to employ model-based grasping theory to deal with multi-contact situations. Our torque-control approach integrates dexterous multi-manipulator grasping skills, optimization of contact forces, compensation of object dynamics, and advanced impedance regulation into a coherent compliant control scheme. To achieve this, we contribute fundamental theoretical improvements. Finally, experiments with up to four collaborative KUKA LWR IV+ manipulators, performed both in simulation and in the real world, validate the model-based control approach. As a side effect, we notice that our multi-manipulator control framework applies identically to multi-legged systems, and we execute it also on the quadruped ANYmal subject to non-coplanar contacts and human interaction.
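For context on the impedance regulation mentioned above, a standard task-space impedance law for a single torque-controlled arm takes the form below; this is a generic textbook building block, not the paper's full multi-contact, multi-robot controller:

$\tau = J(q)^\top \bigl( K_p (x_d - x) + D (\dot{x}_d - \dot{x}) + F_c \bigr) + g(q)$

where $K_p$ and $D$ are stiffness and damping matrices, $x_d$ and $x$ the desired and measured end-effector poses, $F_c$ a commanded contact force, and $g(q)$ gravity compensation. In a multi-manipulator grasp, the individual contact forces $F_c$ would additionally be coordinated, for example by optimizing over the grasp map, so that their net wrench balances the object dynamics.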
Citations: 10
GRSTAPS: Graphically Recursive Simultaneous Task Allocation, Planning, and Scheduling
IF 9.2, Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2021-11-09. DOI: 10.1177/02783649211052066
Andrew Messing, Glen Neville, S. Chernova, S. Hutchinson, H. Ravichandar
Effective deployment of multi-robot teams requires solving several interdependent problems at varying levels of abstraction. Specifically, heterogeneous multi-robot systems must answer four important questions: what (task planning), how (motion planning), who (task allocation), and when (scheduling). Although there are rich bodies of work dedicated to various combinations of these questions, a fully integrated treatment of all four questions lies beyond the scope of the current literature, which lacks even a formal description of the complete problem. In this article, we address this absence, first by formalizing this class of multi-robot problems under the banner Simultaneous Task Allocation and Planning with Spatiotemporal Constraints (STAP-STC), and then by proposing a solution that we call Graphically Recursive Simultaneous Task Allocation, Planning, and Scheduling (GRSTAPS). GRSTAPS interleaves task planning, task allocation, scheduling, and motion planning, performing a multi-layer search while effectively sharing information among system modules. In addition to providing a unified solution to STAP-STC problems, GRSTAPS includes individual innovations both in task planning and task allocation. At the task planning level, our interleaved approach allows the planner to abstract away which agents will perform a task using an approach that we refer to as agent-agnostic planning. At the task allocation level, we contribute a search-based algorithm that can simultaneously satisfy planning constraints and task requirements while optimizing the associated schedule. We demonstrate the efficacy of GRSTAPS using detailed ablative and comparative experiments in a simulated emergency-response domain. Results of these experiments conclusively demonstrate that GRSTAPS outperforms both ablative baselines and state-of-the-art temporal planners in terms of computation time, solution quality, and problem coverage.
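A heavily simplified Python sketch of the interleaved, multi-layer search described above (all functions are hypothetical stand-ins, not the GRSTAPS implementation): a symbolic task plan is only accepted once an allocation, a schedule, and feasible motion plans can all be found for it, and failure at any layer sends the search back to the layer above.

# Sketch only: task planning, allocation, scheduling, and motion planning are
# checked in sequence for each candidate symbolic plan.
def enumerate_task_plans(problem):
    yield ["extinguish_fire", "clear_rubble"]      # placeholder symbolic plans
    yield ["clear_rubble", "extinguish_fire"]

def allocate(plan, robots):
    return {task: robots[i % len(robots)] for i, task in enumerate(plan)}

def schedule(plan, allocation):
    return {task: float(i) for i, task in enumerate(plan)}   # start times

def motion_plan(task, robot):
    return robot != "broken_robot"                 # placeholder feasibility check

def interleaved_search(problem, robots):
    for plan in enumerate_task_plans(problem):
        allocation = allocate(plan, robots)
        times = schedule(plan, allocation)
        if all(motion_plan(t, allocation[t]) for t in plan):
            return plan, allocation, times
    return None

print(interleaved_search(problem=None, robots=["uav_1", "ugv_2"]))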
Citations: 12
NTU VIRAL: A visual-inertial-ranging-lidar dataset, from an aerial vehicle viewpoint
IF 9.2, Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2021-11-06. DOI: 10.1177/02783649211052312
Thien-Minh Nguyen, Shenghai Yuan, Muqing Cao, Yang Lyu, T. Nguyen, Lihua Xie
In recent years, autonomous robots have become ubiquitous in research and daily life. Among many factors, public datasets play an important role in the progress of this field, as they waive the tall order of initial investment in hardware and manpower. However, for research on autonomous aerial systems, there appears to be a relative lack of public datasets on par with those used for autonomous driving and ground robots. Thus, to fill in this gap, we conduct a data collection exercise on an aerial platform equipped with an extensive and unique set of sensors: two 3D lidars, two hardware-synchronized global-shutter cameras, multiple Inertial Measurement Units (IMUs), and especially, multiple Ultra-wideband (UWB) ranging units. The comprehensive sensor suite resembles that of an autonomous driving car, but features distinct and challenging characteristics of aerial operations. We record multiple datasets in several challenging indoor and outdoor conditions. Calibration results and ground truth from a high-accuracy laser tracker are also included in each package. All resources can be accessed via our webpage https://ntu-aris.github.io/ntu_viral_dataset/.
Citations: 51
A large-scale dataset for indoor visual localization with high-precision ground truth
IF 9.2, Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2021-10-26. DOI: 10.1177/02783649211052064
Yuchen Liu, Wei Gao, Zhanyi Hu
This article presents a challenging new dataset for indoor localization research. We have recorded the whole internal structure of Fengtai Wanda Plaza which is an area of over 15,800 m2 with a Navvis M6 device. The dataset contains 679 RGB-D panoramas and 2,664 query images collected by three different smartphones. In addition to the data, an aligned 3D point cloud is produced after the elimination of moving objects based on the building floorplan. Furthermore, a method is provided to generate corresponding high-resolution depth images for each panorama. By fixing the smartphones on the device using a specially designed bracket, six-degree-of-freedom camera poses can be calculated precisely. We believe it can give a new benchmark for indoor visual localization and the full dataset can be downloaded from http://vision.ia.ac.cn/Faculty/wgao/data_code/data_indoor_localizaiton/data_indoor_localization.htm
Citations: 4
Systematic object-invariant in-hand manipulation via reconfigurable underactuation: Introducing the RUTH gripper
IF 9.2, Tier 1 (Computer Science), Q1 ROBOTICS. Pub Date: 2021-10-22. DOI: 10.1177/02783649211048929
Qiujie Lu, Nicholas Baron, A. B. Clark, Nicolás Rojas
We introduce a reconfigurable underactuated robot hand able to perform systematic prehensile in-hand manipulations regardless of object size or shape. The hand utilizes a two-degree-of-freedom five-bar linkage as the palm of the gripper, with three three-phalanx underactuated fingers, jointly controlled by a single actuator, connected to the mobile revolute joints of the palm. Three actuators are used in the robot hand system in total, one for controlling the force exerted on objects by the fingers through an underactuated tendon system, and two for changing the configuration of the palm and, thus, the positioning of the fingers. This novel layout allows decoupling grasping and manipulation, facilitating the planning and execution of in-hand manipulation operations. The reconfigurable palm provides the hand with a large grasping versatility, and allows easy computation of a map between task space and joint space for manipulation based on distance-based linkage kinematics. The motion of objects of different sizes and shapes from one pose to another is then straightforward and systematic, provided the objects are kept grasped. This is guaranteed independently and passively by the underactuated fingers using a custom tendon routing method, which allows no tendon length variation when the relative finger base positions change with palm reconfigurations. We analyze the theoretical grasping workspace and grasping and manipulation capability of the hand, present algorithms for computing the manipulation map and in-hand manipulation planning, and evaluate all these experimentally. Numerical and empirical results of several manipulation trajectories with objects of different size and shape clearly demonstrate the viability of the proposed concept.
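To illustrate the distance-based linkage kinematics mentioned above, the Python sketch below computes the coupler point of a planar five-bar mechanism from its two actuated angles as a circle-circle intersection; the link lengths are arbitrary placeholder values, not the RUTH design parameters.

# Sketch only: forward kinematics of a symmetric planar five-bar linkage.
import math

def five_bar_coupler(theta1, theta2, base=0.10, l_prox=0.06, l_dist=0.08):
    # proximal joints placed by the two motor angles
    ax, ay = -base / 2 + l_prox * math.cos(theta1), l_prox * math.sin(theta1)
    bx, by = base / 2 + l_prox * math.cos(theta2), l_prox * math.sin(theta2)
    dx, dy = bx - ax, by - ay
    d = math.hypot(dx, dy)
    if d > 2 * l_dist or d == 0.0:
        return None                               # no assembly for these angles
    a = d / 2
    h = math.sqrt(l_dist**2 - a**2)
    mx, my = ax + a * dx / d, ay + a * dy / d
    # take the "elbow-up" intersection of the two distal-link circles
    return (mx - h * dy / d, my + h * dx / d)

print(five_bar_coupler(math.radians(100), math.radians(80)))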
Citations: 10