{"title":"Modeling Adaptive Expression of Robot Learning Engagement and Exploring its Effects on Human Teachers","authors":"Shuai Ma, Mingfei Sun, Xiaojuan Ma","doi":"10.1145/3571813","DOIUrl":null,"url":null,"abstract":"Robot Learning from Demonstration (RLfD) allows non-expert users to teach a robot new skills or tasks directly through demonstrations. Although modeled after human-human learning and teaching, existing RLfD methods make robots act as passive observers without the feedback of their learning statuses in the demonstration gathering stage. To facilitate a more transparent teaching process, we propose two mechanisms of Learning Engagement, Z2O-Mode and D2O-Mode, to dynamically adapt robots’ attentional and behavioral engagement expressions to their actual learning status. Through an online user experiment with 48 participants, we find that, compared with two baselines, the two kinds of Learning Engagement can lead to users’ more accurate mental models of the robot’s learning progress, more positive perceptions of the robot, and better teaching experience. Finally, we provide implications for leveraging engagement expression to facilitate transparent human-AI (robot) communication based on our key findings.","PeriodicalId":50917,"journal":{"name":"ACM Transactions on Computer-Human Interaction","volume":" ","pages":""},"PeriodicalIF":4.8000,"publicationDate":"2022-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Computer-Human Interaction","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3571813","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Citations: 3
Abstract
Robot Learning from Demonstration (RLfD) allows non-expert users to teach a robot new skills or tasks directly through demonstrations. Although modeled after human-human learning and teaching, existing RLfD methods make robots act as passive observers that give no feedback about their learning status during the demonstration-gathering stage. To facilitate a more transparent teaching process, we propose two Learning Engagement mechanisms, Z2O-Mode and D2O-Mode, which dynamically adapt a robot's attentional and behavioral engagement expressions to its actual learning status. Through an online user experiment with 48 participants, we find that, compared with two baselines, both kinds of Learning Engagement lead to more accurate user mental models of the robot's learning progress, more positive perceptions of the robot, and a better teaching experience. Finally, based on our key findings, we provide implications for leveraging engagement expression to facilitate transparent human-AI (robot) communication.
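To make the core idea concrete, the sketch below illustrates (in Python) how an estimated learning status might be mapped to attentional and behavioral engagement cues. It is a minimal, hypothetical illustration of the general adaptation principle described in the abstract; the function names, expression parameters, and thresholds are assumptions and do not reproduce the paper's actual Z2O-Mode or D2O-Mode implementations.

```python
# Hypothetical sketch: mapping a robot's estimated learning status to
# engagement-expression intensities. Names and numeric ranges are illustrative
# assumptions, not the paper's Z2O-Mode / D2O-Mode definitions.

from dataclasses import dataclass


@dataclass
class EngagementExpression:
    gaze_follow_ratio: float   # attentional cue: fraction of time gazing at the demonstration
    nod_rate_per_min: float    # behavioral cue: how often the robot nods/acknowledges


def express_engagement(learning_progress: float) -> EngagementExpression:
    """Map an estimated learning progress in [0, 1] to expression parameters.

    Higher estimated progress yields stronger attentional and behavioral cues,
    so the human teacher can read the robot's learning status from its behavior.
    """
    p = min(max(learning_progress, 0.0), 1.0)  # clamp to [0, 1]
    return EngagementExpression(
        gaze_follow_ratio=0.3 + 0.7 * p,   # from occasional glances to steady gaze
        nod_rate_per_min=1.0 + 9.0 * p,    # from rare to frequent acknowledgements
    )


if __name__ == "__main__":
    for progress in (0.1, 0.5, 0.9):
        expr = express_engagement(progress)
        print(f"progress={progress:.1f} -> gaze={expr.gaze_follow_ratio:.2f}, "
              f"nods/min={expr.nod_rate_per_min:.1f}")
```

The design choice being illustrated is simply that expression parameters are a function of the robot's actual learning status rather than fixed, which is what makes the teaching process more transparent to the human demonstrator.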
Journal Description
This ACM Transaction seeks to be the premier archival journal in the multidisciplinary field of human-computer interaction. Since its first issue in March 1994, it has presented work of the highest scientific quality that contributes to practice, both present and future. The primary emphasis is on results of broad application, but the journal also considers original work focused on specific domains, special requirements, and ethical issues, covering the full range of design, development, and use of interactive systems.