Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation

Ruixing Jia, Lei Yang, Ying Cao, Calvin Kalun Or, Wenping Wang, Jia Pan
{"title":"Learning Autonomous Viewpoint Adjustment from Human Demonstrations for Telemanipulation","authors":"Ruixing Jia, Lei Yang, Ying Cao, Calvin Kalun Or, Wenping Wang, Jia Pan","doi":"10.1145/3660348","DOIUrl":null,"url":null,"abstract":"Teleoperation systems find many applications from earlier search-and-rescue to more recent daily tasks. It is widely acknowledged that using external sensors can decouple the view of the remote scene from the motion of the robot arm during manipulation, facilitating the control task. However, this design requires the coordination of multiple operators or may exhaust a single operator as s/he needs to control both the manipulator arm and the external sensors. To address this challenge, our work introduces a viewpoint prediction model, the first data-driven approach that autonomously adjusts the viewpoint of a dynamic camera to assist in telemanipulation tasks. This model is parameterized by a deep neural network and trained on a set of human demonstrations. We propose a contrastive learning scheme that leverages viewpoints in a camera trajectory as contrastive data for network training. We demonstrated the effectiveness of the proposed viewpoint prediction model by integrating it into a real-world robotic system for telemanipulation. User studies reveal that our model outperforms several camera control methods in terms of control experience and reduces the perceived task load compared to manual camera control. 
As an assistive module of a telemanipulation system, our method significantly reduces task completion time for users who choose to adopt its recommendation.","PeriodicalId":504644,"journal":{"name":"ACM Transactions on Human-Robot Interaction","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Human-Robot Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3660348","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Teleoperation systems find many applications, from earlier search-and-rescue missions to more recent daily tasks. It is widely acknowledged that using external sensors can decouple the view of the remote scene from the motion of the robot arm during manipulation, facilitating the control task. However, this design requires the coordination of multiple operators, or may exhaust a single operator who must control both the manipulator arm and the external sensors. To address this challenge, our work introduces a viewpoint prediction model, the first data-driven approach that autonomously adjusts the viewpoint of a dynamic camera to assist in telemanipulation tasks. The model is parameterized by a deep neural network and trained on a set of human demonstrations. We propose a contrastive learning scheme that leverages viewpoints in a camera trajectory as contrastive data for network training. We demonstrate the effectiveness of the proposed viewpoint prediction model by integrating it into a real-world robotic system for telemanipulation. User studies reveal that our model outperforms several camera control methods in terms of control experience and reduces the perceived task load compared to manual camera control. As an assistive module of a telemanipulation system, our method significantly reduces task completion time for users who choose to adopt its recommendation.
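The abstract does not spell out the contrastive objective, but a common way to use viewpoints along a camera trajectory as contrastive data is an InfoNCE-style loss in which temporally nearby viewpoints serve as positives and distant ones as negatives. The sketch below is a generic illustration of that idea, not the paper's actual training code; the function name, embedding dimensions, and temperature value are all assumptions.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss over viewpoint embeddings.

    anchor, positive: 1-D embedding vectors (e.g. of two temporally
    nearby viewpoints on one camera trajectory).
    negatives: 2-D array (K, D) of embeddings of distant viewpoints.
    Returns the negative log-probability of the positive pair.
    """
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Similarity of the anchor to the positive (index 0) and each negative.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    # Softmax cross-entropy with the positive at index 0.
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

Under this formulation, the loss is small when the anchor embedding is closer to its trajectory neighbor than to the sampled distant viewpoints, which encourages the network to map similar viewpoints to nearby points in embedding space.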