
Conference on Robot Learning: Latest Publications

In-Hand Object Rotation via Rapid Motor Adaptation
Pub Date : 2022-10-10 DOI: 10.48550/arXiv.2210.04887
Haozhi Qi, Ashish Kumar, R. Calandra, Yinsong Ma, J. Malik
Generalized in-hand manipulation has long been an unsolved challenge of robotics. As a small step towards this grand goal, we demonstrate how to design and learn a simple adaptive controller to achieve in-hand object rotation using only fingertips. The controller is trained entirely in simulation on only cylindrical objects, which then - without any fine-tuning - can be directly deployed to a real robot hand to rotate dozens of objects with diverse sizes, shapes, and weights over the z-axis. This is achieved via rapid online adaptation of the controller to the object properties using only proprioception history. Furthermore, natural and stable finger gaits automatically emerge from training the control policy via reinforcement learning. Code and more videos are available at https://haozhi.io/hora
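The abstract's core recipe, a base policy plus an adaptation module that reads only proprioception history, can be sketched roughly as below; the layer sizes, history length, and the dimensionality of the inferred object-property vector are illustrative guesses, not the authors' configuration.

```python
import torch
import torch.nn as nn

class AdaptationModule(nn.Module):
    """Maps a short history of proprioception to a low-dimensional vector
    summarizing the (unobserved) object properties."""
    def __init__(self, proprio_dim=32, history_len=30, extrinsics_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(proprio_dim * history_len, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, extrinsics_dim),
        )

    def forward(self, proprio_history):          # (B, history_len, proprio_dim)
        return self.net(proprio_history)

class Policy(nn.Module):
    """Base policy conditioned on current proprioception and the inferred object vector."""
    def __init__(self, proprio_dim=32, extrinsics_dim=8, action_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(proprio_dim + extrinsics_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, proprio, extrinsics):
        return self.net(torch.cat([proprio, extrinsics], dim=-1))

# At deployment, only proprioception is needed: the history buffer drives online adaptation.
adapt, policy = AdaptationModule(), Policy()
history = torch.zeros(1, 30, 32)                 # rolling buffer of recent joint states/targets
proprio = torch.zeros(1, 32)                     # current joint state
action = policy(proprio, adapt(history))         # fingertip joint targets, shape (1, 16)
```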
Citations: 32
Learning the Dynamics of Compliant Tool-Environment Interaction for Visuo-Tactile Contact Servoing
Pub Date : 2022-10-07 DOI: 10.48550/arXiv.2210.03836
Mark Van der Merwe, D. Berenson, Nima Fazeli
Many manipulation tasks require the robot to control the contact between a grasped compliant tool and the environment, e.g. scraping a frying pan with a spatula. However, modeling tool-environment interaction is difficult, especially when the tool is compliant, and the robot cannot be expected to have the full geometry and physical properties (e.g., mass, stiffness, and friction) of all the tools it must use. We propose a framework that learns to predict the effects of a robot's actions on the contact between the tool and the environment given visuo-tactile perception. Key to our framework is a novel contact feature representation that consists of a binary contact value, the line of contact, and an end-effector wrench. We propose a method to learn the dynamics of these contact features from real world data that does not require predicting the geometry of the compliant tool. We then propose a controller that uses this dynamics model for visuo-tactile contact servoing and show that it is effective at performing scraping tasks with a spatula, even in scenarios where precise contact needs to be made to avoid obstacles.
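A rough sketch of a learned dynamics model over the contact features named above (binary contact value, line of contact, end-effector wrench); the feature parameterizations and network sizes below are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ContactDynamics(nn.Module):
    """Predicts the next contact feature (contact flag, contact line, wrench)
    from the current feature and a commanded end-effector action."""
    def __init__(self, line_dim=6, wrench_dim=6, action_dim=6, hidden=128):
        super().__init__()
        in_dim = 1 + line_dim + wrench_dim + action_dim
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.contact_head = nn.Linear(hidden, 1)           # logit of binary contact
        self.line_head = nn.Linear(hidden, line_dim)       # next line of contact (two endpoints)
        self.wrench_head = nn.Linear(hidden, wrench_dim)   # next end-effector wrench

    def forward(self, contact, line, wrench, action):
        h = self.trunk(torch.cat([contact, line, wrench, action], dim=-1))
        return torch.sigmoid(self.contact_head(h)), self.line_head(h), self.wrench_head(h)

model = ContactDynamics()
c, l, w, a = torch.ones(1, 1), torch.zeros(1, 6), torch.zeros(1, 6), torch.zeros(1, 6)
next_contact, next_line, next_wrench = model(c, l, w, a)   # rolled forward by a contact servo
```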
Citations: 7
VIRDO++: Real-World, Visuo-tactile Dynamics and Perception of Deformable Objects
Pub Date : 2022-10-07 DOI: 10.48550/arXiv.2210.03701
Youngsun Wi, Andy Zeng, Peter R. Florence, Nima Fazeli
Deformable object manipulation can benefit from representations that seamlessly integrate vision and touch while handling occlusions. In this work, we present a novel approach for, and real-world demonstration of, multimodal visuo-tactile state-estimation and dynamics prediction for deformable objects. Our approach, VIRDO++, builds on recent progress in multimodal neural implicit representations for deformable object state-estimation [1] via a new formulation for deformation dynamics and a complementary state-estimation algorithm that (i) maintains a belief over deformations, and (ii) enables practical real-world application by removing the need for privileged contact information. In the context of two real-world robotic tasks, we show:(i) high-fidelity cross-modal state-estimation and prediction of deformable objects from partial visuo-tactile feedback, and (ii) generalization to unseen objects and contact formations.
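One plausible shape for a multimodal implicit representation of this kind is a signed-distance network conditioned on a per-object shape code and on a deformation code inferred from the sensed wrench; everything below (dimensions, the wrench-only deformation encoder) is an illustrative assumption rather than the VIRDO++ architecture.

```python
import torch
import torch.nn as nn

class DeformableImplicit(nn.Module):
    """Signed-distance field f(x | shape_code, deformation_code) for a deformable object.
    The deformation code is inferred here from the sensed end-effector wrench alone."""
    def __init__(self, shape_dim=64, deform_dim=32, hidden=256):
        super().__init__()
        self.deform_encoder = nn.Sequential(             # wrench -> deformation code
            nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, deform_dim))
        self.sdf = nn.Sequential(
            nn.Linear(3 + shape_dim + deform_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, query_points, shape_code, wrench):
        deform_code = self.deform_encoder(wrench)                       # (B, deform_dim)
        codes = torch.cat([shape_code, deform_code], dim=-1)
        codes = codes.unsqueeze(1).expand(-1, query_points.shape[1], -1)
        return self.sdf(torch.cat([query_points, codes], dim=-1))       # (B, N, 1) signed distance

model = DeformableImplicit()
pts = torch.rand(2, 1024, 3)             # query points in the object frame
shape = torch.zeros(2, 64)               # latent shape code per object
wrench = torch.zeros(2, 6)               # sensed end-effector wrench
sdf_values = model(pts, shape, wrench)   # implicit surface under the current deformation
```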
Citations: 6
Real-World Robot Learning with Masked Visual Pre-training
Pub Date : 2022-10-06 DOI: 10.48550/arXiv.2210.03109
Ilija Radosavovic, Tete Xiao, Stephen James, P. Abbeel, J. Malik, Trevor Darrell
In this work, we explore self-supervised visual pre-training on images from diverse, in-the-wild videos for real-world robotic tasks. Like prior work, our visual representations are pre-trained via a masked autoencoder (MAE), frozen, and then passed into a learnable control module. Unlike prior work, we show that the pre-trained representations are effective across a range of real-world robotic tasks and embodiments. We find that our encoder consistently outperforms CLIP (up to 75%), supervised ImageNet pre-training (up to 81%), and training from scratch (up to 81%). Finally, we train a 307M parameter vision transformer on a massive collection of 4.5M images from the Internet and egocentric videos, and demonstrate clearly the benefits of scaling visual pre-training for robot learning.
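The frozen-encoder recipe (pre-trained visual representations passed into a learnable control module) might look roughly like this in PyTorch; the torchvision ViT-B/16 below is only a stand-in for the MAE-pretrained encoder, and the proprioception and action sizes are placeholders (torchvision >= 0.13 assumed).

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

# Stand-in for the MAE-pretrained encoder (a ViT-B/16 backbone with the classifier removed).
encoder = vit_b_16(weights=None)
encoder.heads = nn.Identity()            # keep the 768-d class-token feature
for p in encoder.parameters():
    p.requires_grad = False              # representations stay frozen during control learning
encoder.eval()

# Learnable control head on top of frozen features plus proprioception.
control_head = nn.Sequential(
    nn.Linear(768 + 7, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 7),                   # e.g. a 7-DoF arm action
)

image = torch.rand(1, 3, 224, 224)
proprio = torch.zeros(1, 7)
with torch.no_grad():
    feat = encoder(image)                # (1, 768) frozen visual feature
action = control_head(torch.cat([feat, proprio], dim=-1))
```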
Citations: 95
RAP: Risk-Aware Prediction for Robust Planning
Pub Date : 2022-10-04 DOI: 10.48550/arXiv.2210.01368
Haruki Nishimura, Jean-Pierre Mercat, Blake Wulfe, R. McAllister, Adrien Gaidon
Robust planning in interactive scenarios requires predicting the uncertain future to make risk-aware decisions. Unfortunately, due to long-tail safety-critical events, the risk is often under-estimated by finite-sampling approximations of probabilistic motion forecasts. This can lead to overconfident and unsafe robot behavior, even with robust planners. Instead of assuming full prediction coverage that robust planners require, we propose to make prediction itself risk-aware. We introduce a new prediction objective to learn a risk-biased distribution over trajectories, so that risk evaluation simplifies to an expected cost estimation under this biased distribution. This reduces the sample complexity of the risk estimation during online planning, which is needed for safe real-time performance. Evaluation results in a didactic simulation environment and on a real-world dataset demonstrate the effectiveness of our approach. The code and a demo are available.
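The key idea, replacing a tail-risk estimate under the original forecast with a plain expectation under a risk-biased forecast, can be illustrated with a small Monte-Carlo toy; the cost function is made up, and the "learned" biased distribution is faked here by keeping the empirical tail of the forecast rather than training the paper's objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    """Toy planning cost: the closer the predicted agent is to the ego lane at 0, the worse."""
    return np.exp(-np.abs(x))

def forecast(n):
    """Probabilistic forecast of another agent's lateral offset; risky outcomes near 0 are rare."""
    return rng.normal(loc=2.0, scale=1.0, size=n)

# What the robust planner needs: CVaR, the mean cost of the worst alpha-fraction of outcomes.
alpha = 0.05
pool = forecast(200_000)
pool_costs = cost(pool)
var = np.quantile(pool_costs, 1 - alpha)
cvar = pool_costs[pool_costs >= var].mean()

# Offline, RAP learns a risk-biased forecast whose *ordinary* mean cost matches this tail risk.
# We stand in for that learned distribution with the empirical tail of the forecast itself.
biased_pool = pool[pool_costs >= var]

# Online, a handful of biased samples and a plain average already recover the tail risk ...
online_estimate = cost(rng.choice(biased_pool, size=32)).mean()
# ... whereas the same small budget of unbiased samples badly under-estimates it.
naive_estimate = cost(forecast(32)).mean()

print(f"CVaR target:           {cvar:.3f}")
print(f"biased 32-sample mean: {online_estimate:.3f}")
print(f"naive  32-sample mean: {naive_estimate:.3f}")
```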
Citations: 3
Robustness Certification of Visual Perception Models via Camera Motion Smoothing
Pub Date : 2022-10-04 DOI: 10.48550/arXiv.2210.04625
Hanjiang Hu, Zuxin Liu, Linyi Li, Jiacheng Zhu, Ding Zhao
A vast literature shows that the learning-based visual perception model is sensitive to adversarial noises, but few works consider the robustness of robotic perception models under widely-existing camera motion perturbations. To this end, we study the robustness of the visual perception model under camera motion perturbations to investigate the influence of camera motion on robotic perception. Specifically, we propose a motion smoothing technique for arbitrary image classification models, whose robustness under camera motion perturbations could be certified. The proposed robustness certification framework based on camera motion smoothing provides tight and scalable robustness guarantees for visual perception modules so that they are applicable to wide robotic applications. As far as we are aware, this is the first work to provide robustness certification for the deep perception module against camera motions, which improves the trustworthiness of robotic perception. A realistic indoor robotic dataset with a dense point cloud map for the entire room, MetaRoom, is introduced for the challenging certifiable robust perception task. We conduct extensive experiments to validate the certification approach via motion smoothing against camera motion perturbations. Our framework guarantees the certified accuracy of 81.7% against camera translation perturbation along depth direction within -0.1m ~ 0.1m. We also validate the effectiveness of our method on the real-world robot by conducting hardware experiments on the robotic arm with an eye-in-hand camera. The code is available at https://github.com/HanjiangHu/camera-motion-smoothing.
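A simplified sketch of prediction smoothing over camera motion: the base classifier votes over renders from randomly perturbed camera translations along the depth axis. The renderer, classifier, and noise scale below are toy stand-ins, and the actual certification bound is omitted.

```python
import torch
import torch.nn as nn

def smoothed_predict(classifier, render_fn, camera_pose, num_classes,
                     num_samples=100, sigma_z=0.05):
    """Majority vote of a base classifier over images rendered from randomly
    perturbed camera poses (here: translation along the depth axis only)."""
    votes = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(num_samples):
        perturbed = camera_pose.clone()
        perturbed[2] += torch.randn(()).item() * sigma_z        # jitter depth translation
        with torch.no_grad():
            pred = classifier(render_fn(perturbed)).argmax(dim=-1).item()
        votes[pred] += 1
    return int(votes.argmax()), votes                           # smoothed class + vote counts

# Toy stand-ins: a random classifier and a "renderer" that just encodes the pose in an image.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
render_fn = lambda pose: torch.full((1, 3, 32, 32), float(pose[2]))
label, votes = smoothed_predict(classifier, render_fn,
                                torch.tensor([0.0, 0.0, 1.0]), num_classes=10)
```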
Citations: 5
Visuo-Tactile Transformers for Manipulation
Pub Date : 2022-09-30 DOI: 10.48550/arXiv.2210.00121
Yizhou Chen, A. Sipos, Mark Van der Merwe, Nima Fazeli
Learning representations in the joint domain of vision and touch can improve manipulation dexterity, robustness, and sample-complexity by exploiting mutual information and complementary cues. Here, we present Visuo-Tactile Transformers (VTTs), a novel multimodal representation learning approach suited for model-based reinforcement learning and planning. Our approach extends the Visual Transformer (Dosovitskiy et al., 2021) to handle visuo-tactile feedback. Specifically, VTT uses tactile feedback together with self and cross-modal attention to build latent heatmap representations that focus attention on important task features in the visual domain. We demonstrate the efficacy of VTT for representation learning with a comparative evaluation against baselines on four simulated robot tasks and one real world block pushing task. We conduct an ablation study over the components of VTT to highlight the importance of cross-modality in representation learning.
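A compact sketch of joint self- and cross-modal attention over visual and tactile tokens using a standard transformer encoder; the token counts, embedding width, and modality embeddings are illustrative choices, not the VTT configuration.

```python
import torch
import torch.nn as nn

class VisuoTactileEncoder(nn.Module):
    """Embeds image patches and tactile readings as tokens, adds modality embeddings,
    and lets a shared transformer attend within and across the two modalities."""
    def __init__(self, patch_dim=768, tactile_dim=6, d_model=128, n_layers=4, n_heads=4):
        super().__init__()
        self.vision_proj = nn.Linear(patch_dim, d_model)
        self.tactile_proj = nn.Linear(tactile_dim, d_model)
        self.modality_embed = nn.Parameter(torch.zeros(2, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=256,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, patches, tactile):          # (B, Nv, patch_dim), (B, Nt, tactile_dim)
        v = self.vision_proj(patches) + self.modality_embed[0]
        t = self.tactile_proj(tactile) + self.modality_embed[1]
        tokens = torch.cat([v, t], dim=1)          # self- and cross-modal attention in one pass
        return self.transformer(tokens)            # (B, Nv + Nt, d_model) fused representation

enc = VisuoTactileEncoder()
fused = enc(torch.rand(2, 49, 768), torch.rand(2, 4, 6))   # 49 image patches, 4 tactile tokens
```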
Citations: 5
Manipulation via Membranes: High-Resolution and Highly Deformable Tactile Sensing and Control
Pub Date : 2022-09-27 DOI: 10.48550/arXiv.2209.13432
M. Oller, M. Planas, D. Berenson, Nima Fazeli
Collocated tactile sensing is a fundamental enabling technology for dexterous manipulation. However, deformable sensors introduce complex dynamics between the robot, grasped object, and environment that must be considered for fine manipulation. Here, we propose a method to learn soft tactile sensor membrane dynamics that accounts for sensor deformations caused by the physical interaction between the grasped object and environment. Our method combines the perceived 3D geometry of the membrane with proprioceptive reaction wrenches to predict future deformations conditioned on robot action. Grasped object poses are recovered from membrane geometry and reaction wrenches, decoupling interaction dynamics from the tactile observation model. We benchmark our approach on two real-world contact-rich tasks: drawing with a grasped marker and in-hand pivoting. Our results suggest that explicitly modeling membrane dynamics achieves better task performance and generalization to unseen objects than baselines.
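Assuming the perceived membrane geometry has already been encoded into a latent vector, a membrane dynamics model of the kind described could be sketched as below; the latent size, wrench and action dimensions, and the pose head are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class MembraneDynamics(nn.Module):
    """Predicts the next membrane state and the grasped-object pose from the current
    membrane latent, the proprioceptive reaction wrench, and the commanded robot action."""
    def __init__(self, latent_dim=32, wrench_dim=6, action_dim=6, hidden=128):
        super().__init__()
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim + wrench_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim))           # next membrane latent (deformation)
        self.pose_head = nn.Sequential(
            nn.Linear(latent_dim + wrench_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 7))                    # grasped-object pose (xyz + quaternion)

    def forward(self, membrane_latent, wrench, action):
        nxt = self.dynamics(torch.cat([membrane_latent, wrench, action], dim=-1))
        pose = self.pose_head(torch.cat([nxt, wrench], dim=-1))
        return nxt, pose

model = MembraneDynamics()
next_latent, obj_pose = model(torch.zeros(1, 32), torch.zeros(1, 6), torch.zeros(1, 6))
```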
Citations: 5
Fast Lifelong Adaptive Inverse Reinforcement Learning from Demonstrations
Pub Date : 2022-09-24 DOI: 10.48550/arXiv.2209.11908
Letian Chen, Sravan Jayanthi, Rohan R. Paleja, Daniel Martin, Viacheslav Zakharov, M. Gombolay
Learning from Demonstration (LfD) approaches empower end-users to teach robots novel tasks via demonstrations of the desired behaviors, democratizing access to robotics. However, current LfD frameworks are not capable of fast adaptation to heterogeneous human demonstrations nor the large-scale deployment in ubiquitous robotics applications. In this paper, we propose a novel LfD framework, Fast Lifelong Adaptive Inverse Reinforcement learning (FLAIR). Our approach (1) leverages learned strategies to construct policy mixtures for fast adaptation to new demonstrations, allowing for quick end-user personalization, (2) distills common knowledge across demonstrations, achieving accurate task inference; and (3) expands its model only when needed in lifelong deployments, maintaining a concise set of prototypical strategies that can approximate all behaviors via policy mixtures. We empirically validate that FLAIR achieves adaptability (i.e., the robot adapts to heterogeneous, user-specific task preferences), efficiency (i.e., the robot achieves sample-efficient adaptation), and scalability (i.e., the model grows sublinearly with the number of demonstrations while maintaining high performance). FLAIR surpasses benchmarks across three control tasks with an average 57% improvement in policy returns and an average 78% fewer episodes required for demonstration modeling using policy mixtures. Finally, we demonstrate the success of FLAIR in a table tennis task and find users rate FLAIR as having higher task (p<.05) and personalization (p<.05) performance.
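FLAIR itself infers rewards via inverse RL; the sketch below swaps in a simple behavior-cloning loss purely to show the policy-mixture adaptation mechanism, keeping the prototype strategies frozen and fitting only the mixture weights to a new demonstration.

```python
import torch
import torch.nn as nn

def fit_mixture_weights(prototype_policies, demo_states, demo_actions, steps=200, lr=0.1):
    """Fast adaptation via policy mixtures: prototype policies stay frozen and only a
    softmax-weighted combination of their outputs is fit to the new demonstration."""
    logits = torch.zeros(len(prototype_policies), requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    with torch.no_grad():                                   # prototypes are fixed strategies
        proto_actions = torch.stack([p(demo_states) for p in prototype_policies])  # (K, N, A)
    for _ in range(steps):
        weights = torch.softmax(logits, dim=0)              # mixture over prototypes
        blended = (weights[:, None, None] * proto_actions).sum(dim=0)
        loss = nn.functional.mse_loss(blended, demo_actions)
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.softmax(logits, dim=0).detach()

# Toy prototypes and a toy demonstration (state_dim=4, action_dim=2).
prototypes = [nn.Sequential(nn.Linear(4, 2)) for _ in range(3)]
states, actions = torch.rand(64, 4), torch.rand(64, 2)
print(fit_mixture_weights(prototypes, states, actions))     # per-prototype mixture weights
```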
Citations: 4
GLSO: Grammar-guided Latent Space Optimization for Sample-efficient Robot Design Automation
Pub Date : 2022-09-23 DOI: 10.48550/arXiv.2209.11748
Jiaheng Hu, Julian Whiman, H. Choset
Robots have been used in all sorts of automation, and yet the design of robots remains mainly a manual task. We seek to provide design tools to automate the design of robots themselves. An important challenge in robot design automation is the large and complex design search space which grows exponentially with the number of components, making optimization difficult and sample inefficient. In this work, we present Grammar-guided Latent Space Optimization (GLSO), a framework that transforms design automation into a low-dimensional continuous optimization problem by training a graph variational autoencoder (VAE) to learn a mapping between the graph-structured design space and a continuous latent space. This transformation allows optimization to be conducted in a continuous latent space, where sample efficiency can be significantly boosted by applying algorithms such as Bayesian Optimization. GLSO guides training of the VAE using graph grammar rules and robot world space features, such that the learned latent space focuses on valid robots and is easier for the optimization algorithm to explore. Importantly, the trained VAE can be reused to search for designs specialized to multiple different tasks without retraining. We evaluate GLSO by designing robots for a set of locomotion tasks in simulation, and demonstrate that our method outperforms related state-of-the-art robot design automation methods.
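A minimal latent-space Bayesian optimization loop in the spirit described above, with toy stand-ins for the trained VAE decoder and the simulated task reward; scikit-learn and SciPy are assumed, and this is not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def latent_space_bo(decode, evaluate, latent_dim=8, n_init=10, n_iter=30, seed=0):
    """Bayesian optimization in the continuous latent space of a pre-trained design VAE.
    `decode` maps a latent vector to a robot design; `evaluate` returns its task reward."""
    rng = np.random.default_rng(seed)
    Z = rng.uniform(-2, 2, size=(n_init, latent_dim))        # initial latent samples
    y = np.array([evaluate(decode(z)) for z in Z])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(Z, y)
        cand = rng.uniform(-2, 2, size=(2048, latent_dim))   # random candidate pool
        mu, std = gp.predict(cand, return_std=True)
        z_score = (mu - y.max()) / (std + 1e-9)
        ei = (mu - y.max()) * norm.cdf(z_score) + std * norm.pdf(z_score)  # expected improvement
        z_next = cand[np.argmax(ei)]
        Z = np.vstack([Z, z_next])
        y = np.append(y, evaluate(decode(z_next)))            # one simulation per BO step
    return Z[np.argmax(y)], y.max()

# Toy stand-ins for the VAE decoder and the simulated locomotion reward.
decode = lambda z: z                                          # identity "decoder"
evaluate = lambda design: -np.sum((design - 0.5) ** 2)        # peak reward at z = 0.5
best_z, best_reward = latent_space_bo(decode, evaluate)
```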
Citations: 3