Explaining robot policies

Applied AI Letters · Pub Date: 2021-11-13 · DOI: 10.1002/ail2.52
Olivia Watkins, Sandy Huang, Julius Frost, Kush Bhatia, Eric Weiner, Pieter Abbeel, Trevor Darrell, Bryan Plummer, Kate Saenko, Anca Dragan
Citations: 4

Abstract

To interact with a robot or make wise decisions about where and how to deploy it in the real world, humans need an accurate mental model of how the robot acts in different situations. We propose to improve users' mental model of a robot by showing them examples of how the robot behaves in informative scenarios. We explore this in two settings. First, we show that when there are many possible environment states, users can more quickly understand the robot's policy if they are shown critical states, where taking a particular action is important. Second, we show that when there is a distribution shift between the training and test environments, it is more effective to show exploratory states that the robot does not visit naturally.
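To make the idea of critical states concrete, one natural selection criterion is a large gap between the value of the best action and the average action value in a state: where the gap is large, choosing the right action matters, so showing that state is informative. The sketch below is an illustrative assumption, not the paper's exact procedure; the function name, threshold, and toy Q-values are invented for the example.

```python
import numpy as np

def critical_states(q_values, threshold=1.0):
    """Return indices of states where the choice of action matters most.

    q_values: array of shape (n_states, n_actions) with estimated Q-values.
    A state counts as 'critical' when the best action's value exceeds the
    mean action value by more than `threshold`.
    """
    gaps = q_values.max(axis=1) - q_values.mean(axis=1)
    return np.flatnonzero(gaps > threshold)

# Toy example: 3 states, 2 actions.
q = np.array([
    [5.0, 0.0],   # large gap: action choice is important here
    [2.0, 1.9],   # small gap: either action is roughly equivalent
    [0.0, 4.0],   # large gap
])
print(critical_states(q, threshold=1.0))  # -> [0 2]
```

States 0 and 2 would be shown to the user, while state 1 (where both actions are nearly equivalent) would be skipped as uninformative about the policy.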

